Response-based Distillation for Incremental Object Detection
- URL: http://arxiv.org/abs/2110.13471v1
- Date: Tue, 26 Oct 2021 08:07:55 GMT
- Title: Response-based Distillation for Incremental Object Detection
- Authors: Tao Feng, Mang Wang
- Abstract summary: Traditional object detectors are ill-equipped for incremental learning.
Fine-tuning directly on a well-trained detection model with only new data will lead to catastrophic forgetting.
We propose a fully response-based incremental distillation method focusing on learning responses from detection bounding boxes and classification predictions.
- Score: 2.337183337110597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional object detectors are ill-equipped for incremental learning.
Fine-tuning directly on a well-trained detection model with only new data leads
to catastrophic forgetting. Knowledge distillation is a straightforward way to
mitigate catastrophic forgetting. In Incremental Object Detection (IOD),
previous work mainly focuses on feature-level knowledge distillation, but the
detector's different responses have not been fully explored yet. In this paper,
we propose a fully response-based incremental distillation method that focuses
on learning responses from detection bounding boxes and classification
predictions. First, our method transfers category knowledge while equipping the
student model with the ability to retain localization knowledge during
incremental learning. In addition, we evaluate the quality of all locations and
provide valuable responses through an adaptive pseudo-label selection (APS)
strategy. Finally, we elucidate that knowledge from different responses should
be assigned different importance during incremental distillation. Extensive
experiments conducted on MS COCO demonstrate significant advantages of our
method, which substantially narrows the performance gap towards full training.
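As a rough illustration of the idea (not the authors' implementation), the sketch below combines a classification-response term with a bounding-box-response term between a frozen teacher and a student detector. The function name, the fixed confidence threshold standing in for the paper's adaptive pseudo-label selection (APS), and the loss weights are all hypothetical.

```python
# Minimal sketch of a response-based incremental distillation loss,
# assuming teacher/student detectors that expose per-location
# classification logits and box regression outputs. The fixed
# threshold below is a hypothetical stand-in for the APS strategy.
import torch
import torch.nn.functional as F

def response_distillation_loss(
    student_cls_logits,   # (N, num_classes) student classification logits
    student_boxes,        # (N, 4) student box regression outputs
    teacher_cls_logits,   # (N, num_classes) teacher classification logits
    teacher_boxes,        # (N, 4) teacher box regression outputs
    cls_weight=1.0,       # importance of the classification response
    box_weight=1.0,       # importance of the localization response
    aps_threshold=0.5,    # stand-in for adaptive pseudo-label selection
    temperature=2.0,
):
    # Keep only locations where the teacher is confident enough; the
    # paper evaluates location quality adaptively, here a fixed cut-off.
    teacher_scores = teacher_cls_logits.softmax(dim=-1).amax(dim=-1)
    keep = teacher_scores > aps_threshold
    if keep.sum() == 0:
        return student_cls_logits.new_zeros(())

    # Classification response: KL divergence on temperature-softened logits.
    t = temperature
    cls_loss = F.kl_div(
        F.log_softmax(student_cls_logits[keep] / t, dim=-1),
        F.softmax(teacher_cls_logits[keep] / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)

    # Localization response: pull the student's boxes toward the
    # teacher's boxes so old-class localization knowledge is retained.
    box_loss = F.smooth_l1_loss(student_boxes[keep], teacher_boxes[keep])

    # Knowledge from different responses gets different importance.
    return cls_weight * cls_loss + box_weight * box_loss
```

In an IOD setting, a term like this would typically be added to the standard detection loss on the new-class data, with the frozen old-class model acting as the teacher.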
Related papers
- Task Integration Distillation for Object Detectors [2.974025533366946]
We propose a knowledge distillation method that addresses both the classification and regression tasks.
We evaluate the importance of features based on the output of the detector's two sub-tasks.
This method effectively avoids biased estimates of what the detector has actually learned.
arXiv Detail & Related papers (2024-04-02T07:08:15Z)
- Class-aware Information for Logit-based Knowledge Distillation [16.634819319915923]
We propose a Class-aware Logit Knowledge Distillation (CLKD) method that extends logit distillation to both the instance level and the class level.
CLKD enables the student model to mimic higher-level semantic information from the teacher model, hence improving distillation performance.
arXiv Detail & Related papers (2022-11-27T09:27:50Z)
- Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation [66.25738680429463]
Knowledge Distillation (KD) for object detection aims to train a compact detector by transferring knowledge from a teacher model.
We propose inconsistent knowledge distillation (IKD) which aims to distill knowledge inherent in the teacher model's counter-intuitive perceptions.
Our method outperforms state-of-the-art KD baselines on one-stage, two-stage and anchor-free object detectors.
arXiv Detail & Related papers (2022-09-20T16:36:28Z)
- Localization Distillation for Object Detection [134.12664548771534]
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the classification logits.
We present a novel localization distillation (LD) method which can efficiently transfer the localization knowledge from the teacher to the student.
We show that logit mimicking can outperform feature imitation, and that the absence of localization distillation is a key reason why logit mimicking underperformed for years (a minimal sketch of the idea appears after this list).
arXiv Detail & Related papers (2022-04-12T17:14:34Z)
- Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation [4.846235640334886]
Traditional object detectors are ill-equipped for incremental learning.
Fine-tuning directly on a well-trained detection model with only new data will lead to catastrophic forgetting.
We propose a response-based incremental distillation method, dubbed Elastic Response Distillation (ERD).
arXiv Detail & Related papers (2022-04-05T11:57:43Z)
- Label Assignment Distillation for Object Detection [0.0]
We come up with a simple but effective knowledge distillation approach focusing on label assignment in object detection.
Our method shows encouraging results on the MSCOCO 2017 benchmark.
arXiv Detail & Related papers (2021-09-16T10:11:58Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- Distilling Image Classifiers in Object Detectors [81.63849985128527]
We study the case of object detection and, instead of following the standard detector-to-detector distillation approach, introduce a classifier-to-detector knowledge transfer framework.
In particular, we propose strategies to exploit the classification teacher to improve both the detector's recognition accuracy and localization performance.
arXiv Detail & Related papers (2021-06-09T16:50:10Z)
- Deep Semi-supervised Knowledge Distillation for Overlapping Cervical Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
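For the Localization Distillation (LD) entry above, here is a minimal sketch of the core idea, assuming a GFL-style detection head that predicts each of the four box edges as a discrete distribution over distance bins. Function and parameter names are hypothetical; this illustrates the technique, not the authors' code.

```python
# Minimal sketch of localization distillation in the spirit of LD,
# assuming each box edge (left/top/right/bottom) is predicted as a
# categorical distribution over `num_bins` distance bins.
import torch
import torch.nn.functional as F

def localization_distillation_loss(student_edge_logits, teacher_edge_logits, temperature=10.0):
    """KL divergence between teacher and student box-edge distributions.

    Both inputs have shape (N, 4, num_bins): N locations, 4 edges,
    each edge a distribution over distance bins.
    """
    t = temperature
    log_p_student = F.log_softmax(student_edge_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_edge_logits / t, dim=-1)
    # batchmean sums the KL over edges and bins and divides by the
    # number of locations; T^2 rescales gradients, as in standard KD.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```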