DILF-EN framework for Class-Incremental Learning
- URL: http://arxiv.org/abs/2112.12385v1
- Date: Thu, 23 Dec 2021 06:49:24 GMT
- Title: DILF-EN framework for Class-Incremental Learning
- Authors: Mohammed Asad Karim, Indu Joshi, Pratik Mazumder, Pravendra Singh
- Abstract summary: We show that the effect of catastrophic forgetting on the model prediction varies with the change in orientation of the same image.
We propose a novel data-ensemble approach that combines the predictions for the different orientations of the image.
We also propose a novel dual-incremental learning framework that involves jointly training the network with two incremental learning objectives.
- Score: 9.969403314560179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models suffer from catastrophic forgetting of the classes in
the older phases as they get trained on the classes introduced in the new phase
in the class-incremental learning setting. In this work, we show that the
effect of catastrophic forgetting on the model prediction varies with the
change in orientation of the same image, which is a novel finding. Based on
this, we propose a novel data-ensemble approach that combines the predictions
for the different orientations of the image to help the model retain further
information regarding the previously seen classes and thereby reduce the effect
of forgetting on the model predictions. However, we cannot directly use the
data-ensemble approach if the model is trained using traditional techniques.
Therefore, we also propose a novel dual-incremental learning framework that
involves jointly training the network with two incremental learning objectives,
i.e., the class-incremental learning objective and our proposed
data-incremental learning objective. In the dual-incremental learning
framework, each image belongs to two classes, i.e., the image class (for
class-incremental learning) and the orientation class (for data-incremental
learning). In class-incremental learning, each new phase introduces a new set
of classes, and the model cannot access the complete training data from the
older phases. In our proposed data-incremental learning, the orientation
classes remain the same across all the phases, and the data introduced by the
new phase in class-incremental learning acts as new training data for these
orientation classes. We empirically demonstrate that the dual-incremental
learning framework is vital to the data-ensemble approach. We apply our
proposed approach to state-of-the-art class-incremental learning methods and
empirically show that our framework significantly improves the performance of
these methods.
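To make the two objectives concrete, the sketch below shows one plausible realization in PyTorch; it is a minimal illustration, not the authors' released code. A shared backbone feeds two heads: an image-class head for the class-incremental objective, and a fixed-size orientation head for the proposed data-incremental objective. Training sums a cross-entropy loss for both heads over rotated copies of each image, and inference averages the class predictions across orientations (the data ensemble). The names `backbone` and `feat_dim`, and the choice of four 90-degree rotations as the orientation classes, are illustrative assumptions.

```python
# Minimal sketch of the dual-incremental framework and the data-ensemble
# inference described in the abstract. Illustrative only; the backbone,
# feature dimension, and four-rotation orientation set are assumptions.
import torch
import torch.nn as nn

NUM_ORIENTATIONS = 4  # assumed: 0, 90, 180, 270 degree rotations

class DualIncrementalModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_image_classes: int):
        super().__init__()
        self.backbone = backbone  # shared feature extractor
        # Class-incremental head: grows as new phases introduce image classes.
        self.class_head = nn.Linear(feat_dim, num_image_classes)
        # Data-incremental head: orientation classes are fixed across phases.
        self.orient_head = nn.Linear(feat_dim, NUM_ORIENTATIONS)

    def forward(self, x):
        feats = self.backbone(x)
        return self.class_head(feats), self.orient_head(feats)

def joint_loss(model, images, class_labels):
    """Jointly optimize the class-incremental and data-incremental objectives."""
    ce = nn.CrossEntropyLoss()
    loss = 0.0
    for k in range(NUM_ORIENTATIONS):
        rotated = torch.rot90(images, k, dims=[2, 3])   # orientation class = k
        class_logits, orient_logits = model(rotated)
        orient_labels = torch.full_like(class_labels, k)
        loss = loss + ce(class_logits, class_labels) + ce(orient_logits, orient_labels)
    return loss / NUM_ORIENTATIONS

@torch.no_grad()
def data_ensemble_predict(model, images):
    """Average class predictions over all orientations of each image."""
    probs = 0.0
    for k in range(NUM_ORIENTATIONS):
        class_logits, _ = model(torch.rot90(images, k, dims=[2, 3]))
        probs = probs + class_logits.softmax(dim=1)
    return (probs / NUM_ORIENTATIONS).argmax(dim=1)
```

Because the orientation classes recur in every phase, the orientation objective receives fresh training data whenever class-incremental learning introduces new data, which is what lets the ensembled predictions retain information about previously seen classes.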
Related papers
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Robust Feature Learning and Global Variance-Driven Classifier Alignment for Long-Tail Class Incremental Learning [20.267257778779992]
This paper introduces a two-stage framework designed to enhance long-tail class incremental learning.
We address the challenge posed by the under-representation of tail classes in long-tail class incremental learning.
The proposed framework can seamlessly integrate as a module with any class incremental learning method.
arXiv Detail & Related papers (2023-11-02T13:28:53Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Learning What Not to Segment: A New Perspective on Few-Shot Segmentation [63.910211095033596]
Recently, few-shot segmentation (FSS) has been extensively developed.
This paper proposes a fresh and straightforward insight to alleviate the problem.
In light of the unique nature of the proposed approach, we also extend it to a more realistic but challenging setting.
arXiv Detail & Related papers (2022-03-15T03:08:27Z)
- Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning [141.35105358670316]
We study the difference between a naïvely-trained initial-phase model and the oracle model.
We propose Class-wise Decorrelation (CwD) that effectively regularizes representations of each class to scatter more uniformly.
Our CwD is simple to implement and easy to plug into existing methods.
arXiv Detail & Related papers (2021-12-09T07:20:32Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
- Move-to-Data: A new Continual Learning approach with Deep CNNs, Application for image-class recognition [0.0]
It is necessary to pre-train the model in a "training recording phase" and then adjust it to the newly arriving data.
We propose a fast continual learning layer at the end of the neuronal network.
arXiv Detail & Related papers (2020-06-12T13:04:58Z)
- Incremental Learning In Online Scenario [8.885829189810197]
Current state-of-the-art incremental learning methods require a long time to train the model whenever new classes are added.
We propose an incremental learning framework that can work in the challenging online learning scenario.
arXiv Detail & Related papers (2020-03-30T02:24:26Z)
- Cognitively-Inspired Model for Incremental Learning Using a Few Examples [11.193504036335503]
Incremental learning attempts to develop a classifier which learns continuously from a stream of data segregated into different classes.
Deep learning approaches suffer from catastrophic forgetting when learning classes incrementally, while most incremental learning approaches require a large amount of training data per class.
We propose a novel approach inspired by the concept learning model of the hippocampus and the neocortex that represents each image class as centroids.
arXiv Detail & Related papers (2020-02-27T19:52:42Z)
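As a rough illustration of the centroid idea in the entry above, here is a generic nearest-class-mean sketch in PyTorch. The paper's hippocampus- and neocortex-inspired method is more involved than this; treat the code only as the underlying classification rule, with all names assumed for illustration.

```python
# Generic nearest-class-centroid sketch: represent each class by the mean of
# its feature vectors and classify by the closest centroid. Not the paper's
# actual method; an assumed, minimal illustration of the classification rule.
import torch

def class_centroids(features: torch.Tensor, labels: torch.Tensor, num_classes: int):
    """One centroid per class: the mean feature vector over that class's samples."""
    centroids = torch.zeros(num_classes, features.shape[1])
    for c in range(num_classes):
        centroids[c] = features[labels == c].mean(dim=0)  # assumes each class has samples
    return centroids

def nearest_centroid_predict(features: torch.Tensor, centroids: torch.Tensor):
    """Assign each sample to the class whose centroid is nearest in feature space."""
    dists = torch.cdist(features, centroids)  # pairwise Euclidean distances (N x C)
    return dists.argmin(dim=1)
```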
This list is automatically generated from the titles and abstracts of the papers in this site.