Incremental Learning In Online Scenario
- URL: http://arxiv.org/abs/2003.13191v2
- Date: Mon, 19 Apr 2021 00:50:31 GMT
- Title: Incremental Learning In Online Scenario
- Authors: Jiangpeng He, Runyu Mao, Zeman Shao and Fengqing Zhu
- Abstract summary: Current state-of-the-art incremental learning methods require a long time to train the model whenever new classes are added.
We propose an incremental learning framework that can work in the challenging online learning scenario.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep learning approaches have achieved great success in many vision
applications by training a model on all available task-specific data.
However, two major obstacles make them challenging to deploy in
real-life applications: (1) Learning new classes makes the trained model
quickly forget the knowledge of old classes, which is referred to as catastrophic
forgetting. (2) As new observations of old classes arrive sequentially over time,
the distribution may change in unforeseen ways, degrading performance
dramatically on future data, which is referred to as concept drift. Current
state-of-the-art incremental learning methods require a long time to train the
model whenever new classes are added, and none of them takes into account
new observations of old classes. In this paper, we propose an incremental
learning framework that can work in the challenging online learning scenario
and handle both new class data and new observations of old classes. We
address problem (1) in online mode by introducing a modified cross-distillation
loss together with a two-step learning technique. Our method outperforms
current state-of-the-art offline incremental learning
methods on the CIFAR-100 and ImageNet-1000 (ILSVRC 2012) datasets under the
same experimental protocol but in the online scenario. We also provide a simple yet
effective method to mitigate problem (2) by updating the exemplar set using the
feature of each new observation of old classes, and we demonstrate a real-life
application of online food image classification based on our complete framework
using the Food-101 dataset.
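The abstract does not spell out the modified cross-distillation loss, so the following is only a minimal PyTorch sketch of a generic cross-distillation objective: cross-entropy on the current labels plus a temperature-softened distillation term that keeps the old-class outputs close to the previous model's. The function name and the hyperparameters `alpha` and `T` are illustrative assumptions, not values from the paper.

```python
import torch.nn.functional as F

def cross_distillation_loss(logits, prev_logits, labels,
                            num_old_classes, alpha=0.5, T=2.0):
    """Hedged sketch of a cross-distillation objective (not the paper's
    exact formulation): classification loss over all classes plus a
    distillation loss that preserves the previous model's old-class outputs."""
    # Standard cross-entropy over all (old + new) classes.
    ce = F.cross_entropy(logits, labels)

    # Distillation on the old-class logits only, softened by temperature T.
    log_p_old = F.log_softmax(logits[:, :num_old_classes] / T, dim=1)
    q_old = F.softmax(prev_logits[:, :num_old_classes] / T, dim=1)
    kd = F.kl_div(log_p_old, q_old, reduction="batchmean") * (T * T)

    # alpha balances stability (distillation) against plasticity (CE).
    return alpha * kd + (1.0 - alpha) * ce
```

In a two-step scheme of the kind the abstract mentions, one would plausibly first update on the incoming batch with such a loss and then rebalance using stored exemplars, but the exact procedure is not specified in this summary.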
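Likewise, the concept-drift mitigation is described only as updating the exemplar set with the feature of each new observation of an old class. One plausible, hypothetical reading is to keep the exemplars whose features lie closest to the running class mean, so the stored set tracks the current distribution; all names below are illustrative, not the paper's API.

```python
import torch

def update_exemplars(exemplar_feats: torch.Tensor,
                     new_feat: torch.Tensor,
                     capacity: int) -> torch.Tensor:
    """Hypothetical per-class exemplar update (not the paper's exact rule):
    add the new observation's feature, then, if over capacity, drop the
    feature farthest from the updated class mean."""
    # Append the new observation's feature vector.
    feats = torch.cat([exemplar_feats, new_feat.unsqueeze(0)], dim=0)
    if feats.shape[0] <= capacity:
        return feats

    # Drop the exemplar farthest from the class mean, so the retained
    # set follows the (possibly drifting) class distribution.
    mean = feats.mean(dim=0)
    dists = (feats - mean).norm(dim=1)
    drop = int(dists.argmax())
    return torch.cat([feats[:drop], feats[drop + 1:]], dim=0)
```

For example, with `capacity=20` and 512-dimensional features, `update_exemplars(torch.zeros(20, 512), torch.randn(512), 20)` returns a 20-exemplar set that has absorbed the new observation.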
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves an average performance increase of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Class Incremental Learning with Self-Supervised Pre-Training and Prototype Learning [21.901331484173944]
We analyze the causes of catastrophic forgetting in class incremental learning.
We propose a two-stage learning framework with a fixed encoder and an incrementally updated prototype classifier.
Our method does not rely on preserved samples of old classes and is thus a non-exemplar-based CIL method.
arXiv Detail & Related papers (2023-08-04T14:20:42Z)
- PIVOT: Prompting for Video Continual Learning [50.80141083993668]
We introduce PIVOT, a novel method that leverages extensive knowledge in pre-trained models from the image domain.
Our experiments show that PIVOT improves state-of-the-art methods by a significant 27% on the 20-task ActivityNet setup.
arXiv Detail & Related papers (2022-12-09T13:22:27Z)
- Prototypical quadruplet for few-shot class incremental learning [24.814045065163135]
We propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrastive loss.
Our approach retains previously acquired knowledge in the embedding space, even when trained with new classes.
We demonstrate the effectiveness of our method by showing that the embedding space remains intact after training with new classes, and that it outperforms existing state-of-the-art algorithms in accuracy across sessions.
arXiv Detail & Related papers (2022-11-05T17:19:14Z)
- Knowledge Consolidation based Class Incremental Online Learning with Limited Data [41.87919913719975]
We propose a novel approach for class incremental online learning in a limited data setting.
We learn robust representations that are generalizable across tasks without suffering from the problems of catastrophic forgetting and overfitting.
arXiv Detail & Related papers (2021-06-12T15:18:29Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL has two main challenges: 1) novel class detection, and 2) model update: after the novel classes are detected, the model needs to be updated without re-training on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
- Move-to-Data: A new Continual Learning approach with Deep CNNs, Application for image-class recognition [0.0]
The model must be pre-trained in a "training recording phase" and then adjusted to newly arriving data.
We propose a fast continual learning layer at the end of the neural network.
arXiv Detail & Related papers (2020-06-12T13:04:58Z)
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
During Class-IL training, the model has no knowledge of subsequent tasks, so it extracts only the features necessary for the tasks learned so far, which carry insufficient information for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)