Towards Class-incremental Object Detection with Nearest Mean of
Exemplars
- URL: http://arxiv.org/abs/2008.08336v3
- Date: Fri, 9 Oct 2020 07:23:34 GMT
- Title: Towards Class-incremental Object Detection with Nearest Mean of
Exemplars
- Authors: Sheng Ren, Yan He, Neal N. Xiong and Kehua Guo
- Abstract summary: Incremental learning modifies the parameters and structure of a deep learning model so that the model does not forget old knowledge while learning new knowledge.
This paper proposes an incremental learning method that adjusts the model's parameters by identifying prototype vectors and increasing the distances between them.
- Score: 5.546052390414686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Incremental learning is a form of online learning. It modifies the parameters and structure of a deep learning model so that the model does not forget old knowledge while learning new knowledge; preventing catastrophic forgetting is the most important task of incremental learning. However, current incremental learning methods often handle only one type of input at a time: if the new images all belong to a single category, the model can learn the new knowledge without forgetting the old, but if several categories are added at once, the model cannot handle them correctly and its accuracy drops significantly. Therefore, this paper proposes an incremental learning method that adjusts the model's parameters by identifying prototype vectors and increasing the distances between them, so that the model can learn new knowledge without catastrophic forgetting. Experiments show the effectiveness of the proposed method.
Related papers
- Learning Causal Features for Incremental Object Detection [12.255977992587596]
We propose an incremental causal object detection (ICOD) model that learns causal features and can adapt to more tasks.
ICOD is trained to learn causal features rather than data-bias features when training the detector.
arXiv Detail & Related papers (2024-03-01T15:14:43Z)
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint with a knowledge distillation approach to preserve past knowledge while still learning new patterns (a generic distillation sketch appears after this list).
We empirically show that our approach performs well against well-established baselines on multiple benchmarks.
arXiv Detail & Related papers (2024-02-02T09:33:07Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU); a gradient-projection sketch appears after this list.
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Class Incremental Learning with Self-Supervised Pre-Training and Prototype Learning [21.901331484173944]
We analyze the causes of catastrophic forgetting in class incremental learning.
We propose a two-stage learning framework with a fixed encoder and an incrementally updated prototype classifier.
Our method does not rely on preserved samples of old classes and is thus a non-exemplar-based CIL method.
arXiv Detail & Related papers (2023-08-04T14:20:42Z)
- Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling [69.60713300418467]
Learning to jump is a general recipe for generative modeling of various types of data.
We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better.
arXiv Detail & Related papers (2023-05-28T05:38:28Z)
- Remind of the Past: Incremental Learning with Analogical Prompts [30.333352182303038]
We design an analogy-making mechanism to remap the new data into the old class by prompt tuning.
It mimics the feature distribution of the target old class on the old model using only samples of new classes.
The learnt prompts are further used to estimate and counteract the representation shift caused by fine-tuning for the historical prototypes.
arXiv Detail & Related papers (2023-03-24T10:18:28Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
CIL tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks (a minimal exemplar-selection sketch appears after this list).
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL faces two main challenges: 1) novel class detection (a distance-threshold sketch appears after this list), and 2) model updating.
After the novel classes are detected, the model needs to be updated without re-training on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
In the training procedure of Class-IL, the model has no knowledge of subsequent tasks, so it extracts only the features necessary for the tasks learned so far, and this information is insufficient for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z)
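The margin-dampening entry above combines a soft constraint with knowledge distillation. The sketch below shows a generic distillation-style loss of that kind, not the paper's exact formulation; the temperature T and mixing weight alpha are illustrative assumptions.

```python
# Generic distillation soft constraint for class-incremental learning:
# keep the new model's predictions on old classes close to the frozen
# previous model's, while fitting the new task with cross-entropy.
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, targets, num_old, T=2.0, alpha=0.5):
    ce = F.cross_entropy(new_logits, targets)  # learn new patterns
    kd = F.kl_div(                             # preserve old knowledge
        F.log_softmax(new_logits[:, :num_old] / T, dim=1),
        F.softmax(old_logits[:, :num_old] / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return (1 - alpha) * ce + alpha * kd

# Usage (shapes only): a batch of 4, with 10 old classes and 2 new ones.
new_logits = torch.randn(4, 12)   # current model, all 12 classes
old_logits = torch.randn(4, 10)   # frozen previous model, old classes only
targets = torch.randint(0, 12, (4,))
loss = incremental_loss(new_logits, old_logits, targets, num_old=10)
```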
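For the PGU entry, projected-gradient unlearning can be pictured as restricting updates to directions orthogonal to a subspace that matters for the retained data, so knowledge about it is approximately preserved. How that subspace is obtained below (a random orthonormal basis) is purely illustrative, not the paper's procedure.

```python
# Sketch of gradient projection for unlearning: strip from the update any
# component lying in the subspace spanned by retained-data directions.
import numpy as np

def project_out(grad, retained_basis):
    """retained_basis: (k, d) orthonormal rows; returns grad with its
    component inside span(retained_basis) removed."""
    coeffs = retained_basis @ grad           # (k,) coordinates in the subspace
    return grad - retained_basis.T @ coeffs  # orthogonal complement part

# Usage: project a forget-data gradient against two retained directions.
d = 8
q, _ = np.linalg.qr(np.random.randn(d, 2))  # (d, 2) orthonormal columns
basis = q.T                                  # (2, d) orthonormal rows
g_forget = np.random.randn(d)
g_safe = project_out(g_forget, basis)
print(np.allclose(basis @ g_safe, 0.0))  # True: no retained component left
```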
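For the Memory Transformer Network entry, the exemplar memory it builds on is commonly filled by herding selection: greedily pick the samples whose running mean best tracks the true class mean. The sketch below shows only that standard selection rule; the paper's actual contribution, a transformer attending over stored exemplars, is not reproduced here.

```python
# Herding-style exemplar selection for a memory bank (standard recipe in
# exemplar-based CIL; illustrative, not the Memory Transformer itself).
import numpy as np

def herding_select(features, m):
    """Pick m exemplar indices whose mean approximates the class mean."""
    mu = features.mean(axis=0)
    chosen, running_sum = [], np.zeros_like(mu)
    for k in range(1, m + 1):
        # For each candidate, how close the exemplar mean would get to mu.
        gaps = np.linalg.norm(mu - (running_sum + features) / k, axis=1)
        gaps[chosen] = np.inf          # never pick the same sample twice
        idx = int(np.argmin(gaps))
        chosen.append(idx)
        running_sum += features[idx]
    return chosen

feats = np.random.randn(100, 32)       # features of one class
memory = herding_select(feats, m=10)   # indices of 10 exemplars to store
```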
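Finally, the novel class detection mentioned in the adaptive-embedding entry can be sketched as a distance threshold on top of class prototypes: a query far from every known prototype is flagged as novel. The threshold and prototypes here are illustrative assumptions, not the paper's mechanism.

```python
# Sketch of distance-threshold novel-class detection over prototypes.
import numpy as np

def detect_novel(query, prototypes, threshold):
    dists = {c: np.linalg.norm(query - p) for c, p in prototypes.items()}
    nearest = min(dists, key=dists.get)
    if dists[nearest] > threshold:
        return "novel", None           # too far from every known class
    return "known", nearest

protos = {"cat": np.zeros(64), "dog": np.full(64, 3.0)}
print(detect_novel(np.full(64, -5.0), protos, threshold=10.0))  # ('novel', None)
```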