A Comparative Study of Calibration Methods for Imbalanced Class
Incremental Learning
- URL: http://arxiv.org/abs/2202.00386v1
- Date: Tue, 1 Feb 2022 12:56:17 GMT
- Title: A Comparative Study of Calibration Methods for Imbalanced Class
Incremental Learning
- Authors: Umang Aggarwal, Adrian Popescu, Eden Belouadah and Céline Hudelot
- Abstract summary: We study the problem of learning incrementally from imbalanced datasets.
We use a bounded memory to store exemplars of old classes across incremental states.
We show that simpler vanilla fine tuning is a stronger backbone for imbalanced incremental learning algorithms.
- Score: 10.680349952226935
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning approaches are successful in a wide range of AI problems and in
particular for visual recognition tasks. However, open problems remain, among
them the capacity to handle streams of visual information and the management of
class imbalance in datasets. Existing research approaches these two problems
separately, although they co-occur in real-world applications. Here,
we study the problem of learning incrementally from imbalanced datasets. We
focus on algorithms which have a constant deep model complexity and use a
bounded memory to store exemplars of old classes across incremental states.
Since memory is bounded, old classes are learned with fewer images than new
classes and an imbalance due to incremental learning is added to the initial
dataset imbalance. A score prediction bias in favor of new classes appears and
we evaluate a comprehensive set of score calibration methods to reduce it.
Evaluation is carried out with three datasets, using two dataset imbalance
configurations and three bounded memory sizes. Results show that most
calibration methods have a beneficial effect and that they are most useful for
lower bounded memory sizes, which are most interesting in practice. As a
secondary contribution, we remove the usual distillation component from the
loss function of incremental learning algorithms. We show that simpler vanilla
fine tuning is a stronger backbone for imbalanced incremental learning
algorithms.
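The abstract evaluates post-hoc score calibration as a way to reduce the prediction bias toward new classes. As a rough illustration of that general idea (not the paper's exact protocol), the sketch below rescales new-class scores so that their average magnitude matches that of old-class scores; the function and variable names are illustrative only.

```python
# Minimal sketch of post-hoc score calibration for class-incremental learning.
# Assumption: new-class logits are inflated relative to old-class logits, so we
# rescale them to a comparable magnitude. This is an illustrative simplification,
# not one of the exact calibration methods compared in the paper.
import numpy as np

def calibrate_scores(logits, old_class_ids, new_class_ids, eps=1e-8):
    """Rescale new-class logits toward the old-class score scale.

    logits: (n_samples, n_classes) raw scores from the incremental model.
    old_class_ids / new_class_ids: column indices of past and current classes.
    """
    old_mean = np.abs(logits[:, old_class_ids]).mean()
    new_mean = np.abs(logits[:, new_class_ids]).mean()
    scale = old_mean / (new_mean + eps)      # < 1 when new-class scores dominate
    calibrated = logits.copy()
    calibrated[:, new_class_ids] *= scale
    return calibrated

# Toy usage: 4 old classes, 2 new classes, with new-class scores artificially inflated.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 6))
logits[:, 4:] += 2.0                         # simulate the bias toward new classes
balanced = calibrate_scores(logits, old_class_ids=[0, 1, 2, 3], new_class_ids=[4, 5])
print(balanced.argmax(axis=1))
```

The common thread of the calibration strategies compared in the paper is adjusting scores or classifier parameters after (or alongside) fine-tuning so that old and new classes compete on a more even footing.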
Related papers
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud
Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and classifier learning in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for
Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay the data of experienced tasks when learning new tasks.
However, storing such data is often impractical because of memory constraints or data privacy issues.
As a replacement, data-free data replay methods synthesize samples for replay by inverting the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Neural Collapse Terminus: A Unified Solution for Class Incremental
Learning and Its Variants [166.916517335816]
In this paper, we offer a unified solution to the misalignment dilemma in the three tasks.
We propose a neural collapse terminus, a fixed structure with maximal equiangular inter-class separation for the whole label space.
Our method holds the neural collapse optimality in an incremental fashion regardless of data imbalance or data scarcity.
arXiv Detail & Related papers (2023-08-03T13:09:59Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
CIL tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
- ClaRe: Practical Class Incremental Learning By Remembering Previous
Class Representations [9.530976792843495]
Class Incremental Learning (CIL) aims to learn new concepts well, but not at the expense of performance and accuracy on old data.
ClaRe is an efficient solution for CIL by remembering the representations of learned classes in each increment.
ClaRe generalizes better than prior methods thanks to producing diverse instances from the distribution of previously learned classes.
arXiv Detail & Related papers (2021-03-29T10:39:42Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strength of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to state-of-the-art and an extended ensemble establishes a new state-of-the-art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)
- ScaIL: Classifier Weights Scaling for Class Incremental Learning [12.657788362927834]
In a deep learning approach, the constant computational budget requires the use of a fixed architecture for all incremental states.
The bounded memory generates data imbalance in favor of new classes and a prediction bias toward them appears.
We propose simple but efficient scaling of past class classifier weights to make them more comparable to those of new classes.
arXiv Detail & Related papers (2020-01-16T12:10:45Z)
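The ScaIL entry above rescales past-class classifier weights so that they remain comparable to new-class weights. A minimal sketch of that general idea follows, assuming a simple mean-norm matching rule rather than ScaIL's actual statistics from the initial model states; all names are hypothetical.

```python
# Hedged sketch of classifier-weight scaling in the spirit of ScaIL: past-class
# weight vectors are rescaled so their average norm matches that of new classes.
# ScaIL itself uses statistics of each class from its initial, non-incremental
# state; this simplified version only matches mean weight norms.
import numpy as np

def scale_past_class_weights(fc_weights, past_ids, new_ids):
    """Return a copy of the final-layer weight matrix with past-class rows rescaled.

    fc_weights: (n_classes, feat_dim) weights of the last fully connected layer.
    past_ids / new_ids: row indices of past and newly learned classes.
    """
    w = fc_weights.copy()
    past_norm = np.linalg.norm(w[past_ids], axis=1).mean()
    new_norm = np.linalg.norm(w[new_ids], axis=1).mean()
    w[past_ids] *= new_norm / past_norm      # bring past classes to the new-class scale
    return w

# Toy usage with an imaginary 10-class incremental model and 64-dimensional features.
rng = np.random.default_rng(0)
weights = rng.normal(size=(10, 64))
weights[:6] *= 0.5                           # simulate shrunken past-class weights
rescaled = scale_past_class_weights(weights, past_ids=list(range(6)), new_ids=[6, 7, 8, 9])
```

Like the score calibration methods studied in the main paper, this kind of rescaling leaves the feature extractor untouched and only adjusts how old and new classes are compared at prediction time.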