Initial Classifier Weights Replay for Memoryless Class Incremental
Learning
- URL: http://arxiv.org/abs/2008.13710v1
- Date: Mon, 31 Aug 2020 16:18:12 GMT
- Title: Initial Classifier Weights Replay for Memoryless Class Incremental
Learning
- Authors: Eden Belouadah, Adrian Popescu, Ioannis Kanellos
- Abstract summary: Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times.
We propose a different approach based on a vanilla fine-tuning backbone.
We conduct a thorough evaluation with four public datasets in a memoryless incremental learning setting.
- Score: 11.230170401360633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental Learning (IL) is useful when artificial systems need to deal with
streams of data and do not have access to all data at all times. The most
challenging setting requires a constant complexity of the deep model and an
incremental model update without access to a bounded memory of past data. Then,
the representations of past classes are strongly affected by catastrophic
forgetting. To mitigate its negative effect, an adapted fine-tuning scheme that
includes knowledge distillation is usually deployed. We propose a different
approach based on a vanilla fine-tuning backbone. It leverages the initial
classifier weights, which provide a strong representation of past classes
because they were trained with all class data. However, the magnitudes of
classifiers learned in different incremental states vary, and normalization is
needed for a fair handling of all classes. Normalization is performed by standardizing the
initial classifier weights, which are assumed to be normally distributed. In
addition, prediction scores are calibrated using state-level statistics to
further improve classification fairness. We conduct a thorough
evaluation with four public datasets in a memoryless incremental learning
setting. Results show that our method outperforms existing techniques by a
large margin for large-scale datasets.
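To make the two normalization steps concrete, here is a minimal numpy sketch, assuming a linear classifier with one weight vector per class. The per-vector standardization follows the abstract's description; the state-level calibration statistic used here (mean of top scores per state) is an illustrative assumption, not the authors' exact recipe.

```python
import numpy as np

def standardize_weights(W):
    # Standardize each class weight vector (assumed roughly normal):
    # zero mean and unit standard deviation per vector.
    mu = W.mean(axis=1, keepdims=True)
    sigma = W.std(axis=1, keepdims=True)
    return (W - mu) / sigma

rng = np.random.default_rng(0)
# Hypothetical weights: initial weights of 10 past classes (saved when
# they were first learned with all their data) and 5 new classes.
W_past_initial = rng.normal(0.0, 2.5, size=(10, 64))
W_new = rng.normal(0.0, 0.8, size=(5, 64))

# Replay the initial past-class weights and standardize everything so
# magnitudes learned in different states become comparable.
W = np.vstack([standardize_weights(W_past_initial),
               standardize_weights(W_new)])

def calibrate(scores, state_slices):
    # Assumed state-level statistic: divide each state's scores by the
    # mean of its top scores so no single state dominates predictions.
    scores = scores.copy()
    for sl in state_slices:
        scores[:, sl] /= scores[:, sl].max(axis=1, keepdims=True).mean()
    return scores

features = rng.normal(size=(3, 64))          # embeddings of test samples
scores = calibrate(features @ W.T, [slice(0, 10), slice(10, 15)])
print(scores.argmax(axis=1))                 # predicted classes
```

Standardizing each vector to zero mean and unit deviation removes the state-dependent magnitude differences, so past and new class scores can be compared directly.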
Related papers
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from experienced tasks when learning new tasks.
However, replay is often impractical given memory constraints or data privacy issues.
As a replacement, data-free replay methods are proposed that invert samples from the classification model (a sketch follows this entry).
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
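The sketch referenced above: a toy version of inverting samples from a frozen classifier by optimizing noise inputs toward a chosen past class. The tiny model, dimensions, and optimizer settings are illustrative stand-ins, not the paper's setup.

```python
import torch
import torch.nn.functional as F

# Toy stand-in for a frozen classifier trained on previous tasks.
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 10)).eval()
for p in model.parameters():
    p.requires_grad_(False)

def invert_samples(target_class, n=8, steps=200, lr=0.1):
    # Synthesize pseudo-exemplars of target_class by gradient descent
    # on the inputs while the model weights stay fixed.
    x = torch.randn(n, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    y = torch.full((n,), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return x.detach()

# Inverted samples can then be replayed when learning new classes.
replay_batch = invert_samples(target_class=3)
```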
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated into classification by quantizing the target space into classes (sketched after this entry).
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
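A small illustration of the quantization step mentioned above, assuming uniform bins; the bin count, target range, and the coarse/fine two-level grouping are arbitrary choices, and the actual hierarchical adjustment is more involved than this.

```python
import numpy as np

targets = np.array([0.03, 0.41, 0.97, 2.80, 7.50])  # continuous labels

# Fine level: quantize the target range [0, 8) into 32 uniform bins.
fine_edges = np.linspace(0.0, 8.0, num=33)
fine_class = np.digitize(targets, fine_edges[1:-1])

# Coarse level of a two-level hierarchy: group every 8 fine bins.
coarse_class = fine_class // 8

print(fine_class, coarse_class)
```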
- Class Impression for Data-free Incremental Learning [20.23329169244367]
Deep learning-based classification approaches require collecting all samples from all classes in advance and are trained offline.
This paradigm may not be practical in real-world clinical applications, where new classes are incrementally introduced through the addition of new data.
We propose a novel data-free class incremental learning framework that first synthesizes data from the model trained on previous classes to generate a Class Impression.
arXiv Detail & Related papers (2022-06-26T06:20:17Z)
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are commonly used to alleviate this data bias.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data (a simplified sketch follows this entry).
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
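The simplified sketch promised above: a tiny weighting network maps each sample's loss to a weight in (0, 1) used in the training objective. The full method meta-learns this mapping (and makes it class-aware) on held-out clean data, which is omitted here; all names and sizes are illustrative.

```python
import torch
import torch.nn.functional as F

# Tiny weighting network: maps a per-sample loss to a weight in (0, 1).
weight_net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.ReLU(),
                                 torch.nn.Linear(16, 1), torch.nn.Sigmoid())

classifier = torch.nn.Linear(32, 5)
opt = torch.optim.SGD(classifier.parameters(), lr=0.1)

x, y = torch.randn(64, 32), torch.randint(0, 5, (64,))
losses = F.cross_entropy(classifier(x), y, reduction="none")
# Down- or up-weight each sample according to its own loss value.
weights = weight_net(losses.detach().unsqueeze(1)).squeeze(1)
(weights * losses).mean().backward()
opt.step()
```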
- Latent Vector Expansion using Autoencoder for Anomaly Detection [1.370633147306388]
We use the features of the autoencoder to train latent vectors from low to high dimensionality.
We propose a latent vector expansion autoencoder model that improves classification performance on imbalanced data (a sketch follows this entry).
arXiv Detail & Related papers (2022-01-05T02:28:38Z)
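The sketch referenced above, assuming the "low to high dimensionality" training amounts to sweeping the latent width of an otherwise standard autoencoder; this reading of the procedure is an assumption, and the architecture is a toy one.

```python
import torch

def make_autoencoder(input_dim=64, latent_dim=4):
    enc = torch.nn.Sequential(torch.nn.Linear(input_dim, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, latent_dim))
    dec = torch.nn.Sequential(torch.nn.Linear(latent_dim, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, input_dim))
    return enc, dec

x = torch.randn(128, 64)
# Train latent vectors from low to high dimensionality by sweeping the
# latent width; each width yields a different latent representation.
for latent_dim in (2, 8, 32):
    enc, dec = make_autoencoder(latent_dim=latent_dim)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        torch.nn.functional.mse_loss(dec(enc(x)), x).backward()
        opt.step()
```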
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network (sketched after this entry).
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and WebVision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
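The sketch referenced above: class-mean prototypes act as the classifier, so no extra parameters are fitted on top of the embedding network. The Euclidean distance and the toy data are illustrative choices.

```python
import torch

def class_prototypes(embeddings, labels, num_classes):
    # One mean embedding per class; nothing is fitted.
    return torch.stack([embeddings[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def predict(embeddings, prototypes):
    # Assign each sample to its nearest prototype (Euclidean distance).
    return torch.cdist(embeddings, prototypes).argmin(dim=1)

emb = torch.randn(100, 16)            # embeddings from a trained network
lab = torch.randint(0, 4, (100,))
pred = predict(emb, class_prototypes(emb, lab, num_classes=4))
```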
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty is that the limited data from new classes not only lead to significant overfitting but also exacerbate the notorious catastrophic forgetting problem.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
- Class-incremental Learning using a Sequence of Partial Implicitly Regularized Classifiers [0.0]
In class-incremental learning, the objective is to learn a number of classes sequentially without having access to the whole training data.
Our experiments on the CIFAR100 dataset show that the proposed method improves performance over the state of the art by a large margin.
arXiv Detail & Related papers (2021-04-04T10:02:45Z)
- Feature Space Augmentation for Long-Tailed Data [74.65615132238291]
Real-world data often follow a long-tailed distribution as the frequency of each class is typically different.
Class-balanced loss and advanced methods on data re-sampling and augmentation are among the best practices to alleviate the data imbalance problem.
We present a novel approach to address the long-tailed problem by augmenting the under-represented classes in the feature space with the features learned from classes with ample samples (a sketch follows this entry).
arXiv Detail & Related papers (2020-08-09T06:38:00Z)
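The sketch referenced above: one simple way to augment tail-class features is to transplant intra-class variation estimated from a head class around the tail-class features. The Gaussian transfer used here is an assumed mechanism, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
head_feats = rng.normal(0.0, 1.0, size=(500, 64))  # class with ample samples
tail_feats = rng.normal(3.0, 1.0, size=(10, 64))   # under-represented class

# Estimate intra-class variation from the head class and transplant it
# around tail-class features to create additional training features.
head_var = head_feats.var(axis=0)
anchors = tail_feats[rng.integers(0, len(tail_feats), size=90)]
extra = anchors + rng.normal(0.0, np.sqrt(head_var), size=(90, 64))
augmented_tail = np.vstack([tail_feats, extra])    # 100 features total
```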
- ScaIL: Classifier Weights Scaling for Class Incremental Learning [12.657788362927834]
In a deep learning approach, the constant computational budget requires the use of a fixed architecture for all incremental states.
The bounded memory creates a data imbalance in favor of new classes, and a prediction bias toward them appears.
We propose a simple but efficient scaling of past-class classifier weights to make them more comparable to those of new classes (see the sketch below).
arXiv Detail & Related papers (2020-01-16T12:10:45Z)
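A minimal sketch of the scaling idea from this last entry, which is closely related to the main paper above: past-class weights are rescaled so their magnitudes are comparable to the new classes'. Matching the ratio of mean L2 norms is an illustrative statistic; ScaIL's actual normalization uses different aggregate statistics.

```python
import numpy as np

def scale_past_weights(W_past, W_new):
    # Rescale past-class weight vectors so their mean L2 norm matches
    # the new classes' (illustrative statistic, not ScaIL's exact one).
    past_norm = np.linalg.norm(W_past, axis=1).mean()
    new_norm = np.linalg.norm(W_new, axis=1).mean()
    return W_past * (new_norm / past_norm)

rng = np.random.default_rng(1)
W_past = rng.normal(0.0, 0.5, size=(10, 64))  # shrunk by incremental updates
W_new = rng.normal(0.0, 1.5, size=(5, 64))
W_past_scaled = scale_past_weights(W_past, W_new)
```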
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.