Distilling Object Detectors with Feature Richness
- URL: http://arxiv.org/abs/2111.00674v2
- Date: Tue, 2 Nov 2021 02:52:42 GMT
- Title: Distilling Object Detectors with Feature Richness
- Authors: Zhixing Du, Rui Zhang, Ming Chang, Xishan Zhang, Shaoli Liu, Tianshi
Chen, Yunji Chen
- Abstract summary: Large-scale deep models have achieved great success, but their huge computational complexity and massive storage requirements make them challenging to deploy on resource-limited devices.
As a model compression and acceleration method, knowledge distillation effectively improves the performance of small models by transferring the dark knowledge from the teacher detector.
We propose a novel Feature-Richness Score (FRS) method to choose important features that improve generalized detectability during distilling.
- Score: 13.187669828065554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, large-scale deep models have achieved great
success, but their huge computational complexity and massive storage
requirements make them challenging to deploy on resource-limited devices. As a model compression
and acceleration method, knowledge distillation effectively improves the
performance of small models by transferring the dark knowledge from the teacher
detector. However, most existing distillation-based detection methods
mainly imitate features near bounding boxes, an approach that suffers from two
limitations. First, they ignore the beneficial features outside the bounding
boxes. Second, these methods imitate some features which are mistakenly
regarded as the background by the teacher detector. To address the above
issues, we propose a novel Feature-Richness Score (FRS) method to choose
important features that improve generalized detectability during distilling.
The proposed method effectively retrieves the important features outside the
bounding boxes and removes the detrimental features within the bounding boxes.
Extensive experiments show that our methods achieve excellent performance on
both anchor-based and anchor-free detectors. For example, RetinaNet with a
ResNet-50 backbone achieves 39.7% mAP on the COCO2017 dataset, surpassing even
its ResNet-101-based teacher detector (38.9% mAP) by 0.8%.
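The abstract does not give the FRS formula, but the core idea, scoring every feature location by the teacher's confidence rather than by box membership, can be sketched as follows. The max-over-classes aggregation, the `frs_mask`/`frs_distill_loss` names, and the mask-weighted L2 imitation loss are our illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def frs_mask(teacher_cls_scores):
    """Aggregate per-class confidence into a per-location richness map.

    teacher_cls_scores: (C, H, W) array of the teacher's post-sigmoid
    classification scores at one feature level. Returns an (H, W) mask:
    high wherever the teacher sees evidence of *any* object class,
    regardless of whether the location falls inside a bounding box.
    """
    return teacher_cls_scores.max(axis=0)

def frs_distill_loss(student_feat, teacher_feat, mask):
    """Feature-imitation loss weighted by the richness mask.

    student_feat, teacher_feat: (D, H, W) feature maps, assumed aligned.
    Locations the teacher treats as background get near-zero weight.
    """
    sq_err = (student_feat - teacher_feat) ** 2      # (D, H, W)
    weighted = sq_err * mask[None, :, :]             # broadcast mask over channels
    return weighted.sum() / (mask.sum() * student_feat.shape[0] + 1e-6)

# Toy usage: the mask keeps informative locations inside *and* outside boxes.
rng = np.random.default_rng(0)
scores = rng.random((80, 4, 4)) * 0.1   # mostly low-confidence background...
scores[:, 1, 2] = 0.9                    # ...one location the teacher is sure about
mask = frs_mask(scores)
loss = frs_distill_loss(rng.random((16, 4, 4)), rng.random((16, 4, 4)), mask)
```

Note how the mask naturally retrieves confident locations outside ground-truth boxes and suppresses in-box locations the teacher scores as background, matching the two limitations the abstract identifies.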
Related papers
- ESOD: Efficient Small Object Detection on High-Resolution Images [36.80623357577051]
Small objects are usually sparsely distributed and locally clustered.
Massive feature extraction computations are wasted on the non-target background area of images.
We propose to reuse the detector's backbone to conduct feature-level object-seeking and patch-slicing.
arXiv Detail & Related papers (2024-07-23T12:21:23Z) - Bridging the Gap Between End-to-End and Two-Step Text Spotting [88.14552991115207]
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z) - Object-centric Cross-modal Feature Distillation for Event-based Object
Detection [87.50272918262361]
RGB detectors still outperform event-based detectors due to sparsity of the event data and missing visual details.
We develop a novel knowledge distillation approach to shrink the performance gap between these two modalities.
We show that object-centric distillation significantly improves the performance of the event-based student object detector.
arXiv Detail & Related papers (2023-11-09T16:33:08Z) - Small Object Detection via Coarse-to-fine Proposal Generation and
Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - Efficient Visual Fault Detection for Freight Train Braking System via
Heterogeneous Self Distillation in the Wild [8.062167870951706]
This paper proposes a heterogeneous self-distillation framework to ensure detection accuracy and speed.
We employ a novel loss function that helps the network concentrate on values near the label, improving learning efficiency.
Our framework achieves over 37 frames per second while maintaining higher accuracy than traditional distillation approaches.
arXiv Detail & Related papers (2023-07-03T01:27:39Z) - Gradient-Guided Knowledge Distillation for Object Detectors [3.236217153362305]
We propose a novel approach for knowledge distillation in object detection, named Gradient-guided Knowledge Distillation (GKD)
Our GKD uses gradient information to identify and assign more weights to features that significantly impact the detection loss, allowing the student to learn the most relevant features from the teacher.
Experiments on the KITTI and COCO-Traffic datasets demonstrate our method's efficacy in knowledge distillation for object detection.
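A minimal sketch of the gradient-weighting idea in the GKD summary above, assuming per-location weights come from the L1 magnitude of the detection-loss gradient aggregated over channels (the aggregation and normalization are our illustrative choices, not necessarily GKD's exact formulation):

```python
import numpy as np

def gradient_weights(feat_grad):
    """Per-location weights from the detection-loss gradient.

    feat_grad: (D, H, W) gradient of the detection loss w.r.t. a
    teacher feature map. Locations with large gradient magnitude
    influence the loss most, so they receive the most imitation weight.
    """
    mag = np.abs(feat_grad).sum(axis=0)     # (H, W) L1 magnitude over channels
    return mag / (mag.sum() + 1e-6)         # normalize to a weight distribution

def gkd_loss(student_feat, teacher_feat, weights):
    """Feature-imitation loss, emphasized where gradients are large."""
    sq_err = ((student_feat - teacher_feat) ** 2).mean(axis=0)  # (H, W)
    return (sq_err * weights).sum()

# Toy usage: one location dominates the gradient, so it dominates the loss.
rng = np.random.default_rng(1)
grad = np.zeros((8, 3, 3))
grad[:, 0, 0] = 1.0                          # the only influential location
w = gradient_weights(grad)
loss = gkd_loss(rng.random((8, 3, 3)), rng.random((8, 3, 3)), w)
```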
arXiv Detail & Related papers (2023-03-07T21:09:09Z) - ReDFeat: Recoupling Detection and Description for Multimodal Feature
Learning [51.07496081296863]
We recouple independent constraints of detection and description of multimodal feature learning with a mutual weighting strategy.
We propose a detector that possesses a large receptive field and is equipped with learnable non-maximum suppression layers.
We build a benchmark that contains cross visible, infrared, near-infrared and synthetic aperture radar image pairs for evaluating the performance of features in feature matching and image registration tasks.
arXiv Detail & Related papers (2022-05-16T04:24:22Z) - Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
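The pseudo-labeling step above can be sketched as a simple confidence filter; the 0.95 threshold and the `pseudo_label` helper are hypothetical, chosen only for illustration:

```python
def pseudo_label(predictions, threshold=0.95):
    """Keep only very confident predictions as pseudo-labels.

    predictions: list of (label, confidence) pairs from the detector.
    Discarding low-confidence detections limits the distribution
    drift that noisy pseudo-labels would otherwise introduce.
    """
    return [(lbl, conf) for lbl, conf in predictions if conf >= threshold]

# Toy usage: only the confident detections survive the filter.
preds = [("car", 0.99), ("person", 0.60), ("dog", 0.97)]
labels = pseudo_label(preds)
```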
arXiv Detail & Related papers (2021-06-22T16:53:09Z) - Distilling Object Detectors via Decoupled Features [69.62967325617632]
We present a novel distillation algorithm via decoupled features (DeFeat) for learning a better student detector.
Experiments on various detectors with different backbones show that the proposed DeFeat is able to surpass the state-of-the-art distillation methods for object detection.
arXiv Detail & Related papers (2021-03-26T13:58:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.