Label Assignment Distillation for Object Detection
- URL: http://arxiv.org/abs/2109.07843v2
- Date: Sat, 18 Sep 2021 06:00:56 GMT
- Title: Label Assignment Distillation for Object Detection
- Authors: Minghao Gao, Hailun Zhang (1) and Yige Yan (2) ((1) Beijing Institute
of Technology, (2) Hohai University)
- Abstract summary: We propose a simple but effective knowledge distillation approach focused on label assignment in object detection.
Our method shows encouraging results on the MS COCO 2017 benchmark.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge distillation methods have proved promising for improving the
performance of neural networks, and they add no computational expense at
inference time. To boost the accuracy of object detection, a great number of
knowledge distillation methods designed specifically for object detection have
been proposed. However, most of these methods focus only on feature-level and
label-level distillation, leaving the label assignment step, a unique and
paramount procedure in object detection, by the wayside. In this work, we
propose a simple but effective knowledge distillation approach focusing on
label assignment in object detection, in which the positive and negative
samples of the student network are selected in accordance with the predictions
of the teacher network. Our method shows encouraging results on the MS COCO
2017 benchmark; it can be applied to both one-stage and two-stage detectors,
and it can be used orthogonally with other knowledge distillation methods.
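As an illustration of the core idea, the sketch below shows one way teacher-guided label assignment could look for an anchor-based one-stage detector: the student's positive and negative anchors are chosen from the teacher's classification scores and predicted-box IoUs rather than from ground-truth-only heuristics. The function name, thresholds, and the particular quality score are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of teacher-guided label assignment (assumed form, not the
# paper's released code). The quality criterion and thresholds are placeholders.
import torch


def assign_labels_from_teacher(teacher_cls_scores, teacher_ious,
                               score_threshold=0.5, iou_threshold=0.6):
    """Pick positive/negative anchors for the student from teacher predictions.

    teacher_cls_scores: (num_anchors, num_classes) teacher class probabilities.
    teacher_ious:       (num_anchors,) IoU of each teacher-predicted box with
                        its best-matching ground-truth box.
    Returns a (num_anchors,) tensor with 1 for positives and 0 for negatives.
    """
    # Combine classification confidence and localization quality into a single
    # score; this particular combination is an assumption for illustration.
    quality = teacher_cls_scores.max(dim=1).values * teacher_ious
    positive = (quality >= score_threshold) & (teacher_ious >= iou_threshold)
    return positive.long()


if __name__ == "__main__":
    torch.manual_seed(0)
    teacher_scores = torch.rand(8, 3)   # 8 anchors, 3 classes
    teacher_ious = torch.rand(8)
    labels = assign_labels_from_teacher(teacher_scores, teacher_ious)
    print("positive anchors:", labels.nonzero(as_tuple=True)[0].tolist())
```

The student's classification and regression losses would then be computed against these teacher-derived assignments, which is what would let such an assignment-level scheme be combined with feature- or logit-level distillation methods.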
Related papers
- Task Integration Distillation for Object Detectors [2.974025533366946]
We propose a knowledge distillation method that addresses both the classification and regression tasks.
We evaluate the importance of features based on the output of the detector's two sub-tasks.
This effectively avoids biased predictions about what the model has actually learned.
arXiv Detail & Related papers (2024-04-02T07:08:15Z) - Object-centric Cross-modal Feature Distillation for Event-based Object
Detection [87.50272918262361]
RGB detectors still outperform event-based detectors due to the sparsity of event data and the lack of visual detail.
We develop a novel knowledge distillation approach to shrink the performance gap between these two modalities.
We show that object-centric distillation significantly improves the performance of the event-based student object detector.
arXiv Detail & Related papers (2023-11-09T16:33:08Z) - Efficient Object Detection in Optical Remote Sensing Imagery via
Attention-based Feature Distillation [29.821082433621868]
We propose Attention-based Feature Distillation (AFD) for object detection.
We introduce a multi-instance attention mechanism that effectively distinguishes between background and foreground elements.
AFD matches the performance of state-of-the-art models while remaining efficient.
arXiv Detail & Related papers (2023-10-28T11:15:37Z) - Bridging Cross-task Protocol Inconsistency for Distillation in Dense
Object Detection [19.07452370081663]
We propose a novel distillation method with cross-task consistent protocols, tailored for dense object detection.
For classification distillation, we formulate the classification logit maps of both the teacher and student models as multiple binary-classification maps and apply a binary-classification distillation loss to each map.
Our proposed method is simple but effective, and experimental results demonstrate its superiority over existing methods.
arXiv Detail & Related papers (2023-08-28T03:57:37Z) - Exploring Inconsistent Knowledge Distillation for Object Detection with
Data Augmentation [66.25738680429463]
Knowledge Distillation (KD) for object detection aims to train a compact detector by transferring knowledge from a teacher model.
We propose inconsistent knowledge distillation (IKD) which aims to distill knowledge inherent in the teacher model's counter-intuitive perceptions.
Our method outperforms state-of-the-art KD baselines on one-stage, two-stage and anchor-free object detectors.
arXiv Detail & Related papers (2022-09-20T16:36:28Z) - Localization Distillation for Object Detection [134.12664548771534]
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the classification logits.
We present a novel localization distillation (LD) method which can efficiently transfer the localization knowledge from the teacher to the student.
We show that logit mimicking can outperform feature imitation, and that the absence of localization distillation is a key reason why logit mimicking has underperformed for years.
arXiv Detail & Related papers (2022-04-12T17:14:34Z) - Label, Verify, Correct: A Simple Few Shot Object Detection Method [93.84801062680786]
We introduce a simple pseudo-labelling method to source high-quality pseudo-annotations from a training set.
We present two novel methods to improve the precision of the pseudo-labelling process.
Our method achieves state-of-the-art or second-best performance compared to existing approaches.
arXiv Detail & Related papers (2021-12-10T18:59:06Z) - Response-based Distillation for Incremental Object Detection [2.337183337110597]
Traditional object detectors are ill-equipped for incremental learning.
Fine-tuning a well-trained detection model directly on only new data leads to catastrophic forgetting.
We propose a fully response-based incremental distillation method focusing on learning response from detection bounding boxes and classification predictions.
arXiv Detail & Related papers (2021-10-26T08:07:55Z) - Distilling Image Classifiers in Object Detectors [81.63849985128527]
We study the case of object detection and, instead of following the standard detector-to-detector distillation approach, introduce a classifier-to-detector knowledge transfer framework.
In particular, we propose strategies to exploit the classification teacher to improve both the detector's recognition accuracy and localization performance.
arXiv Detail & Related papers (2021-06-09T16:50:10Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)