ScaIL: Classifier Weights Scaling for Class Incremental Learning
- URL: http://arxiv.org/abs/2001.05755v1
- Date: Thu, 16 Jan 2020 12:10:45 GMT
- Title: ScaIL: Classifier Weights Scaling for Class Incremental Learning
- Authors: Eden Belouadah and Adrian Popescu
- Abstract summary: In a deep learning approach, the constant computational budget requires the use of a fixed architecture for all incremental states.
The bounded memory creates a data imbalance in favor of new classes, which leads to a prediction bias toward them.
We propose a simple but efficient scaling of past-class classifier weights to make them more comparable to those of new classes.
- Score: 12.657788362927834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental learning is useful if an AI agent needs to integrate data from a stream. The problem is non-trivial if the agent runs on a limited computational budget and has a bounded memory of past data. In a deep learning approach, the constant computational budget requires the use of a fixed architecture for all incremental states. The bounded memory generates a data imbalance in favor of new classes, and a prediction bias toward them appears. This bias is commonly countered by introducing a data balancing step in addition to the basic network training. We depart from this approach and propose a simple but efficient scaling of past-class classifier weights to make them more comparable to those of new classes. Scaling exploits incremental-state-level statistics and is applied to the classifiers learned in the initial state of each class in order to profit from all of their available data. We also question the utility of the widely used distillation loss component of incremental learning algorithms by comparing it to vanilla fine-tuning in the presence of a bounded memory. Evaluation is done against competitive baselines using four public datasets. Results show that the classifier weights scaling and the removal of the distillation are both beneficial.
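The core operation described in the abstract can be sketched in a few lines of NumPy. The sketch below is illustrative only: the choice of mean L2 norms as the state-level statistic, and all function and variable names, are assumptions; the paper's actual aggregation over past- and new-class weights differs in detail.

```python
import numpy as np

def scale_past_classifier_weights(past_w_initial, new_w_current):
    """Rescale past-class classifier weights, saved from the incremental state
    in which each class was first learned, so that their magnitudes become
    comparable to the weights of classes learned in the current state.

    past_w_initial: (n_past, d) weights stored from the classes' initial states.
    new_w_current:  (n_new, d) weights of new classes in the current state.
    """
    # Illustrative state-level statistic: match mean L2 norms of the two groups.
    past_norms = np.linalg.norm(past_w_initial, axis=1)
    new_norms = np.linalg.norm(new_w_current, axis=1)
    scale = new_norms.mean() / past_norms.mean()
    return past_w_initial * scale

# Hypothetical usage: the current-state classifier stacks the scaled
# past-class weights with the new-class weights learned on all their data.
# past_w = np.stack([stored_initial_weights[c] for c in past_classes])
# new_w = current_fc_weights[new_class_indices]
# combined_w = np.vstack([scale_past_classifier_weights(past_w, new_w), new_w])
```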
Related papers
- Online Nonparametric Supervised Learning for Massive Data [0.0]
We develop a fast algorithm adapted to the real-time calculation of the nonparametric classifier in massive as well as streaming data frameworks.
The proposed methods are evaluated and compared to some commonly used machine learning algorithms for real-time fetal well-being monitoring.
arXiv Detail & Related papers (2024-05-29T20:04:23Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP (see the sketch below).
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
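For context, a minimal sketch of what a training-free GDA classifier over precomputed CLIP image features could look like. The shared-covariance formulation, the ridge term, and all names are assumptions rather than the paper's exact recipe.

```python
import numpy as np

def fit_gda(features, labels, ridge=1e-3):
    """Gaussian Discriminant Analysis with a shared covariance matrix,
    fitted on precomputed image embeddings (e.g. CLIP features).
    Returns the weights and biases of the equivalent linear classifier."""
    classes = np.unique(labels)
    d = features.shape[1]
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    centered = features - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(features) + ridge * np.eye(d)
    cov_inv = np.linalg.inv(cov)
    priors = np.array([(labels == c).mean() for c in classes])
    weights = means @ cov_inv                              # (n_classes, d)
    biases = -0.5 * np.sum(weights * means, axis=1) + np.log(priors)
    return weights, biases

def gda_predict(features, weights, biases):
    """Class index with the highest Gaussian discriminant score."""
    return np.argmax(features @ weights.T + biases, axis=1)
```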
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay the data of experienced tasks when learning new tasks.
However, this is often impractical in view of memory constraints or data privacy issues.
As an alternative, data-free replay methods have been proposed that invert samples from the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated as classification by quantizing the target space into classes (illustrated in the sketch below).
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z)
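For illustration, a small sketch of the standard reduction mentioned above: a continuous target is mapped to bin classes and back. The uniform binning and bin count are arbitrary example choices, not the paper's hierarchical scheme.

```python
import numpy as np

def quantize_targets(y, n_bins=10, y_min=None, y_max=None):
    """Map continuous regression targets to class indices via uniform binning."""
    y = np.asarray(y, dtype=float)
    y_min = y.min() if y_min is None else y_min
    y_max = y.max() if y_max is None else y_max
    edges = np.linspace(y_min, y_max, n_bins + 1)
    # Interior edges give bin indices 0..n_bins-1; clip guards boundary values.
    return np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)

def dequantize(class_idx, n_bins, y_min, y_max):
    """Recover an approximate continuous prediction as the bin center."""
    width = (y_max - y_min) / n_bins
    return y_min + (np.asarray(class_idx) + 0.5) * width
```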
- AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning [53.32576252950481]
Continual learning aims to enable a model to incrementally learn knowledge from sequentially arrived data.
In this paper, we propose a non-incremental learner, named AttriCLIP, to incrementally extract knowledge of new classes or tasks.
arXiv Detail & Related papers (2023-05-19T07:39:17Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
In doing so, CIL models tend to catastrophically forget the characteristics of former classes, and their performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods is the use of an exemplar memory, which counters catastrophic forgetting by saving a subset of past data into a memory bank and replaying it when training on future tasks (see the sketch below).
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
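As a rough illustration of such an exemplar memory, the sketch below keeps a fixed per-class budget of past samples and returns them for rehearsal. Random selection and all names are assumptions; herding-based selection is common in practice.

```python
import random

class ExemplarMemory:
    """Bounded memory of past examples for rehearsal in incremental learning."""

    def __init__(self, budget_per_class=20):
        self.budget = budget_per_class
        self.store = {}  # class label -> list of stored samples

    def add_class(self, label, samples):
        """Keep at most `budget_per_class` samples of a newly learned class."""
        samples = list(samples)
        self.store[label] = random.sample(samples, min(self.budget, len(samples)))

    def replay(self):
        """All stored (sample, label) pairs, to be mixed with new-task data."""
        return [(x, c) for c, xs in self.store.items() for x in xs]

# Hypothetical usage at the end of each incremental state:
# memory = ExemplarMemory(budget_per_class=20)
# for c in new_classes:
#     memory.add_class(c, samples_of[c])
# next_train_set = list(new_task_data) + memory.replay()
```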
- A Comparative Study of Calibration Methods for Imbalanced Class Incremental Learning [10.680349952226935]
We study the problem of learning incrementally from imbalanced datasets.
We use a bounded memory to store exemplars of old classes across incremental states.
We show that simpler vanilla fine tuning is a stronger backbone for imbalanced incremental learning algorithms.
arXiv Detail & Related papers (2022-02-01T12:56:17Z)
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
- ClaRe: Practical Class Incremental Learning By Remembering Previous Class Representations [9.530976792843495]
Class Incremental Learning (CIL) aims to learn new concepts well, but not at the expense of performance and accuracy on old data.
ClaRe is an efficient solution for CIL by remembering the representations of learned classes in each increment.
ClaRe has a better generalization than prior methods thanks to producing diverse instances from the distribution of previously learned classes.
arXiv Detail & Related papers (2021-03-29T10:39:42Z)
- Initial Classifier Weights Replay for Memoryless Class Incremental Learning [11.230170401360633]
Incremental Learning (IL) is useful when artificial systems need to deal with streams of data and do not have access to all data at all times.
We propose a different approach based on a vanilla fine tuning backbone.
We conduct a thorough evaluation with four public datasets in a memoryless incremental learning setting.
arXiv Detail & Related papers (2020-08-31T16:18:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.