Two-Level Residual Distillation based Triple Network for Incremental
Object Detection
- URL: http://arxiv.org/abs/2007.13428v1
- Date: Mon, 27 Jul 2020 11:04:57 GMT
- Title: Two-Level Residual Distillation based Triple Network for Incremental
Object Detection
- Authors: Dongbao Yang, Yu Zhou, Dayan Wu, Can Ma, Fei Yang, Weiping Wang
- Abstract summary: We propose a novel incremental object detector based on Faster R-CNN that continuously learns from new object classes without using old data.
It is a triple network in which an old model and a residual model serve as assistants, helping the incremental model learn new classes without forgetting previously learned knowledge.
- Score: 21.725878050355824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern object detection methods based on convolutional neural
networks suffer from severe catastrophic forgetting when learning new classes
without the original data. Due to the time cost, storage burden, and privacy
concerns of old data, it is inadvisable to retrain the model from scratch on
both old and new data whenever new object classes emerge after the model has
been trained. In this paper, we propose a novel incremental object detector
based on Faster R-CNN that continuously learns from new object classes without
using old data. It is a triple network in which an old model and a residual
model serve as assistants, helping the incremental model learn new classes
without forgetting previously learned knowledge. To
better maintain the discrimination of features between old and new classes, the
residual model is jointly trained on new classes in the incremental learning
procedure. In addition, a corresponding distillation scheme is designed to
guide the training process, which consists of a two-level residual distillation
loss and a joint classification distillation loss. Extensive experiments on
VOC2007 and COCO are conducted, and the results demonstrate that the proposed
method can effectively learn to incrementally detect objects of new classes,
and the problem of catastrophic forgetting is mitigated in this context.
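The abstract describes a distillation scheme with two parts: a residual distillation term that constrains the incremental model's features, and a classification distillation term that preserves the old model's class responses. The paper's exact formulation is not given here, so the sketch below is only illustrative: a temperature-softened KL classification distillation loss and an L2 feature term pulling the incremental model toward the combination of old-model and residual-model features. All function names and the specific combination rule (old + residual) are assumptions for illustration, not the authors' definition.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def classification_distillation_loss(old_logits, new_logits, temperature=2.0):
    """KL divergence between the old model's softened class distribution
    and the incremental model's, so responses on old classes are preserved."""
    p = softmax(old_logits, temperature)
    q = softmax(new_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def residual_feature_loss(old_feat, residual_feat, new_feat):
    """Mean squared error pulling the incremental model's features toward
    the sum of the old model's and residual model's features (an assumed
    combination rule for illustration)."""
    return sum((o + r - n) ** 2
               for o, r, n in zip(old_feat, residual_feat, new_feat)) / len(new_feat)

# Example: identical logits give zero classification distillation loss,
# and features that exactly match old + residual give zero feature loss.
cls_loss = classification_distillation_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
feat_loss = residual_feature_loss([1.0, 2.0], [0.5, 0.5], [1.5, 2.5])
```

In a real training loop these terms would be computed per region proposal and weighted against the standard Faster R-CNN detection losses on the new classes; the weighting scheme is part of the paper's design and is not reproduced here.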
Related papers
- BSDP: Brain-inspired Streaming Dual-level Perturbations for Online Open
World Object Detection [31.467501311528498]
We aim to make deep learning models simulate the way people learn.
Existing OWOD approaches focus on identifying unknown categories, although the incremental learning component is equally important.
In this paper, we take the dual-level information of old samples as perturbations on new samples to make the model good at learning new knowledge without forgetting the old knowledge.
arXiv Detail & Related papers (2024-03-05T04:00:50Z) - Complementary Learning Subnetworks for Parameter-Efficient
Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order.
arXiv Detail & Related papers (2023-06-21T01:43:25Z) - Continual Learning with Bayesian Model based on a Fixed Pre-trained
Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z) - FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z) - Online Deep Metric Learning via Mutual Distillation [9.363111089877625]
Deep metric learning aims to transform input data into an embedding space, where similar samples are close while dissimilar samples are far apart from each other.
Existing solutions either retrain the model from scratch or require the replay of old samples during the training.
This paper proposes a complete online deep metric learning framework based on mutual distillation for both one-task and multi-task scenarios.
arXiv Detail & Related papers (2022-03-10T07:24:36Z) - Static-Dynamic Co-Teaching for Class-Incremental 3D Object Detection [71.18882803642526]
Deep learning approaches have shown remarkable performance in the 3D object detection task.
They suffer from a catastrophic performance drop when incrementally learning new classes without revisiting the old data.
This "catastrophic forgetting" phenomenon impedes the deployment of 3D object detection approaches in real-world scenarios.
We present the first solution, SDCoT, a novel static-dynamic co-teaching method.
arXiv Detail & Related papers (2021-12-14T09:03:41Z) - Bridging Non Co-occurrence with Unlabeled In-the-wild Data for
Incremental Object Detection [56.22467011292147]
Several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection.
Despite the effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non-co-occurrence caused by the missing base classes during the training of additional novel classes.
arXiv Detail & Related papers (2021-10-28T10:57:25Z) - Multi-View Correlation Distillation for Incremental Object Detection [12.536640582318949]
We propose a novel Multi-View Correlation Distillation (MVCD) based incremental object detection method.
arXiv Detail & Related papers (2021-07-05T04:36:33Z) - Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed set learning, CIL has two main challenges: 1) Novel class detection.
After the novel classes are detected, the model needs to be updated without re-training using entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z) - Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.