Class-incremental learning: survey and performance evaluation on image classification
- URL: http://arxiv.org/abs/2010.15277v2
- Date: Thu, 6 May 2021 21:30:23 GMT
- Title: Class-incremental learning: survey and performance evaluation on image classification
- Authors: Marc Masana, Xialei Liu, Bartlomiej Twardowski, Mikel Menta, Andrew D. Bagdanov, Joost van de Weijer
- Abstract summary: Incremental learning allows for efficient resource usage by eliminating the need to retrain from scratch at the arrival of new data.
The main challenge for incremental learning is catastrophic forgetting, which refers to the precipitous drop in performance on previously learned tasks after learning a new one.
Recently, we have seen a shift towards class-incremental learning where the learner must discriminate at inference time between all classes seen in previous tasks without recourse to a task-ID.
- Score: 38.27344435075399
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For future learning systems incremental learning is desirable, because it
allows for: efficient resource usage by eliminating the need to retrain from
scratch at the arrival of new data; reduced memory usage by preventing or
limiting the amount of data required to be stored -- also important when
privacy limitations are imposed; and learning that more closely resembles human
learning. The main challenge for incremental learning is catastrophic
forgetting, which refers to the precipitous drop in performance on previously
learned tasks after learning a new one. Incremental learning of deep neural
networks has seen explosive growth in recent years. Initial work focused on
task-incremental learning, where a task-ID is provided at inference time.
Recently, we have seen a shift towards class-incremental learning where the
learner must discriminate at inference time between all classes seen in
previous tasks without recourse to a task-ID. In this paper, we provide a
complete survey of existing class-incremental learning methods for image
classification, and in particular we perform an extensive experimental
evaluation on thirteen class-incremental methods. We consider several new
experimental scenarios, including a comparison of class-incremental methods on
multiple large-scale image classification datasets, investigation into small
and large domain shifts, and comparison of various network architectures.
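The distinction drawn above between task-incremental and class-incremental inference can be sketched in a few lines. This is an illustrative toy, not code from the surveyed methods: the helper names (`task_il_predict`, `class_il_predict`) and the dictionary-of-scores model are assumptions made for the example.

```python
# Toy contrast between task-IL and class-IL inference. Per-class scores
# stand in for a trained model's output logits.

def task_il_predict(scores, task_classes, task_id):
    """Task-IL: the provided task-ID restricts the argmax to that task's classes."""
    candidates = task_classes[task_id]
    return max(candidates, key=lambda c: scores[c])

def class_il_predict(scores):
    """Class-IL: no task-ID, so the argmax runs over all classes seen so far."""
    return max(scores, key=lambda c: scores[c])

# Two tasks of two classes each.
task_classes = {0: ["cat", "dog"], 1: ["car", "bus"]}
scores = {"cat": 0.9, "dog": 0.2, "car": 1.1, "bus": 0.3}

print(task_il_predict(scores, task_classes, 0))  # -> "cat" (task 0 only)
print(class_il_predict(scores))                  # -> "car" (all four classes)
```

The same scores yield different predictions: with the task-ID, the model never confuses classes across tasks, which is exactly the cross-task discrimination that makes class-IL harder.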
Related papers
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
CIL tends to catastrophically forget the characteristics of former ones, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- DIODE: Dilatable Incremental Object Detection [15.59425584971872]
Conventional deep learning models lack the capability of preserving previously learned knowledge.
We propose a dilatable incremental object detector (DIODE) for multi-step incremental detection tasks.
Our method achieves up to 6.4% performance improvement by increasing the number of parameters by just 1.2% for each newly learned task.
arXiv Detail & Related papers (2021-08-12T09:45:57Z)
- On the importance of cross-task features for class-incremental learning [14.704888854064501]
In class-incremental learning, an agent with limited resources needs to learn a sequence of classification tasks.
The main difference with task-incremental learning, where a task-ID is available at inference time, is that the learner also needs to perform cross-task discrimination.
arXiv Detail & Related papers (2021-06-22T17:03:15Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- On the Exploration of Incremental Learning for Fine-grained Image Retrieval [45.48333682748607]
We consider the problem of fine-grained image retrieval in an incremental setting, when new categories are added over time.
We propose an incremental learning method to mitigate retrieval performance degradation caused by the forgetting issue.
Our method effectively mitigates the catastrophic forgetting on the original classes while achieving high performance on the new classes.
arXiv Detail & Related papers (2020-10-15T21:07:44Z)
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL)
During training in Class-IL, the model has no knowledge of subsequent tasks, so it extracts only the features needed for the tasks learned so far, which carry insufficient information for joint classification over all classes.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z)
- Semantic Drift Compensation for Class-Incremental Learning [48.749630494026086]
Class-incremental learning of deep networks sequentially increases the number of classes to be classified.
We propose a new method to estimate the drift, called semantic drift, of features and compensate for it without the need of any exemplars.
arXiv Detail & Related papers (2020-04-01T13:31:19Z)
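The drift-compensation idea described above can be sketched as follows: old-class prototypes are shifted by a drift estimated from current-task features only, so no old-class exemplars need to be stored. The function name, the Gaussian proximity weighting, and the toy data are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch of exemplar-free prototype drift compensation. For each
# current-task sample we observe how its embedding moved between the old
# and new model, then shift each stored old-class prototype by a
# proximity-weighted average of those drift vectors.
import math

def compensate_prototype(prototype, old_feats, new_feats, sigma=1.0):
    """Shift an old-class prototype by the weighted mean drift (new - old)
    observed on current-task samples."""
    num = [0.0] * len(prototype)
    den = 0.0
    for old_f, new_f in zip(old_feats, new_feats):
        # Weight each sample's drift by its closeness to the prototype
        # in the old feature space (Gaussian kernel, assumed form).
        d2 = sum((p - o) ** 2 for p, o in zip(prototype, old_f))
        w = math.exp(-d2 / (2 * sigma ** 2))
        drift = [n - o for n, o in zip(new_f, old_f)]
        num = [a + w * d for a, d in zip(num, drift)]
        den += w
    return [p + a / den for p, a in zip(prototype, num)]

# Toy 2-D example: every sample's embedding moved by (+1, 0), so the
# prototype should shift right by exactly one unit.
old_feats = [[0.0, 0.0], [1.0, 1.0]]
new_feats = [[1.0, 0.0], [2.0, 1.0]]
print(compensate_prototype([0.5, 0.5], old_feats, new_feats))  # -> [1.5, 0.5]
```

Classification can then proceed by nearest-prototype matching in the new feature space, with the compensated prototypes standing in for the inaccessible old data.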
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.