Efficient Object Detection in Optical Remote Sensing Imagery via
Attention-based Feature Distillation
- URL: http://arxiv.org/abs/2310.18676v1
- Date: Sat, 28 Oct 2023 11:15:37 GMT
- Title: Efficient Object Detection in Optical Remote Sensing Imagery via
Attention-based Feature Distillation
- Authors: Pourya Shamsolmoali, Jocelyn Chanussot, Huiyu Zhou, Yue Lu
- Abstract summary: We propose Attention-based Feature Distillation (AFD) for object detection.
We introduce a multi-instance attention mechanism that effectively distinguishes between background and foreground elements.
AFD attains the performance of other state-of-the-art models while being efficient.
- Score: 29.821082433621868
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Efficient object detection methods have recently received great attention in
remote sensing. Although deep convolutional networks often have excellent
detection accuracy, their deployment on resource-limited edge devices is
difficult. Knowledge distillation (KD) is a strategy for addressing this issue
since it makes models lightweight while maintaining accuracy. However, existing
KD methods for object detection have encountered two constraints. First, they
discard potentially important background information and only distill nearby
foreground regions. Second, they only rely on the global context, which limits
the student detector's ability to acquire local information from the teacher
detector. To address the aforementioned challenges, we propose Attention-based
Feature Distillation (AFD), a new KD approach that distills both local and
global information from the teacher detector. To enhance local distillation, we
introduce a multi-instance attention mechanism that effectively distinguishes
between background and foreground elements. This approach prompts the student
detector to focus on the pertinent channels and pixels, as identified by the
teacher detector. Since local distillation alone lacks global context, we
further propose attention-based global distillation, which reconstructs the
relationships between pixels and transfers them from the teacher to the
student detector. AFD is evaluated on two public aerial-image benchmarks, and
the results demonstrate that it matches the accuracy of state-of-the-art
detectors while being more efficient.
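As a rough illustration of the idea (not the paper's exact formulation), the sketch below derives spatial and channel attention masks from a teacher feature map and uses them to weight a feature-imitation loss. The function names and the plain-Python nested-list "tensors" are purely illustrative assumptions:

```python
# Illustrative sketch of attention-weighted feature distillation:
# the teacher's feature map defines spatial and channel attention masks
# that tell the student which pixels and channels to imitate most closely.

def attention_masks(feat):
    """feat: [C][H][W] nested lists. Returns (spatial [H][W], channel [C])."""
    C, H, W = len(feat), len(feat[0]), len(feat[0][0])
    # Spatial attention: mean absolute activation across channels per pixel.
    spatial = [[sum(abs(feat[c][i][j]) for c in range(C)) / C
                for j in range(W)] for i in range(H)]
    # Channel attention: mean absolute activation over all pixels per channel.
    channel = [sum(abs(v) for row in feat[c] for v in row) / (H * W)
               for c in range(C)]
    return spatial, channel

def afd_style_loss(teacher, student):
    """Squared feature error, weighted by the teacher's attention masks."""
    spatial, channel = attention_masks(teacher)
    C, H, W = len(teacher), len(teacher[0]), len(teacher[0][0])
    loss = 0.0
    for c in range(C):
        for i in range(H):
            for j in range(W):
                w = channel[c] * spatial[i][j]  # joint channel/pixel weight
                loss += w * (teacher[c][i][j] - student[c][i][j]) ** 2
    return loss / (C * H * W)
```

In practice such masks would be computed on GPU tensors and combined with a separate global-relation term; this toy version only shows how attention can redistribute the imitation loss toward foreground-heavy channels and pixels.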
Related papers
- Object-centric Cross-modal Feature Distillation for Event-based Object
Detection [87.50272918262361]
RGB detectors still outperform event-based detectors due to sparsity of the event data and missing visual details.
We develop a novel knowledge distillation approach to shrink the performance gap between these two modalities.
We show that object-centric distillation significantly improves the performance of the event-based student object detector.
arXiv Detail & Related papers (2023-11-09T16:33:08Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that heavily degraded regions of detector-friendly underwater images (DFUI) and ordinary underwater images exhibit evident feature-distribution gaps.
Our method, with higher speed and fewer parameters, still outperforms transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- From Global to Local: Multi-scale Out-of-distribution Detection [129.37607313927458]
Out-of-distribution (OOD) detection aims to detect "unknown" data whose labels have not been seen during the in-distribution (ID) training process.
Recent progress in representation learning gives rise to distance-based OOD detection.
We propose Multi-scale OOD DEtection (MODE), the first framework to leverage both global visual information and local region details.
arXiv Detail & Related papers (2023-08-20T11:56:25Z)
- Gradient-Guided Knowledge Distillation for Object Detectors [3.236217153362305]
We propose a novel approach for knowledge distillation in object detection, named Gradient-guided Knowledge Distillation (GKD).
Our GKD uses gradient information to identify and assign more weights to features that significantly impact the detection loss, allowing the student to learn the most relevant features from the teacher.
Experiments on the KITTI and COCO-Traffic datasets demonstrate our method's efficacy in knowledge distillation for object detection.
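The gradient-weighted idea above can be sketched as follows. This is a loose, hypothetical reading, assuming per-feature gradients of the detection loss are already available (in a real pipeline they would come from autograd):

```python
# Hypothetical sketch of gradient-guided distillation: per-feature
# gradients of the detection loss are turned into normalized weights,
# so the student imitates most strongly the features that influence
# the loss most.

def gradient_weights(grads):
    """grads: flat list of d(loss)/d(feature). Returns normalized |grad| weights."""
    mags = [abs(g) for g in grads]
    total = sum(mags) or 1.0  # guard against an all-zero gradient
    return [m / total for m in mags]

def gkd_style_loss(teacher_feats, student_feats, grads):
    """Gradient-weighted squared error between teacher and student features."""
    w = gradient_weights(grads)
    return sum(wi * (t - s) ** 2
               for wi, t, s in zip(w, teacher_feats, student_feats))
```

A feature with zero gradient contributes nothing to the loss, which is the stated intuition: only features that matter to detection are worth imitating.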
arXiv Detail & Related papers (2023-03-07T21:09:09Z)
- Exploring Inconsistent Knowledge Distillation for Object Detection with Data Augmentation [66.25738680429463]
Knowledge Distillation (KD) for object detection aims to train a compact detector by transferring knowledge from a teacher model.
We propose inconsistent knowledge distillation (IKD) which aims to distill knowledge inherent in the teacher model's counter-intuitive perceptions.
Our method outperforms state-of-the-art KD baselines on one-stage, two-stage and anchor-free object detectors.
arXiv Detail & Related papers (2022-09-20T16:36:28Z)
- Localization Distillation for Object Detection [134.12664548771534]
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the classification logits.
We present a novel localization distillation (LD) method which can efficiently transfer the localization knowledge from the teacher to the student.
We show that logit mimicking can outperform feature imitation and the absence of localization distillation is a critical reason for why logit mimicking underperforms for years.
arXiv Detail & Related papers (2022-04-12T17:14:34Z)
- Label Assignment Distillation for Object Detection [0.0]
We come up with a simple but effective knowledge distillation approach focusing on label assignment in object detection.
Our method shows encouraging results on the MSCOCO 2017 benchmark.
arXiv Detail & Related papers (2021-09-16T10:11:58Z)
- Distilling Image Classifiers in Object Detectors [81.63849985128527]
We study the case of object detection and, instead of following the standard detector-to-detector distillation approach, introduce a classifier-to-detector knowledge transfer framework.
In particular, we propose strategies to exploit the classification teacher to improve both the detector's recognition accuracy and localization performance.
arXiv Detail & Related papers (2021-06-09T16:50:10Z)
- SWIPENET: Object detection in noisy underwater images [41.35601054297707]
We propose a novel Sample-WeIghted hyPEr Network (SWIPENET), and a robust training paradigm named Curriculum Multi-Class Adaboost (CMA) to address these two problems.
The backbone of SWIPENET produces multiple high resolution and semantic-rich Hyper Feature Maps, which significantly improve small object detection.
Inspired by the human education process, which progresses from easy to hard concepts, we propose the CMA training paradigm, which first trains a clean detector free from the influence of noisy data.
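The easy-to-hard schedule can be sketched generically as below. Note this is a plain curriculum schedule, not CMA's actual Adaboost-based mechanics; the ranking criterion (per-sample loss) and round structure are assumptions for illustration:

```python
# Loose sketch of a curriculum-style schedule in the spirit of CMA:
# samples are ranked from easy to hard (here, by a per-sample loss),
# and each training round admits a growing fraction of the hardest ones.

def curriculum_rounds(sample_losses, n_rounds=3):
    """Return the sample indices used in each round, easiest first."""
    order = sorted(range(len(sample_losses)), key=lambda i: sample_losses[i])
    schedule = []
    for r in range(1, n_rounds + 1):
        # Each round keeps the easiest k samples, with k growing linearly.
        k = max(1, round(len(order) * r / n_rounds))
        schedule.append(order[:k])
    return schedule
```

Early rounds thus see only clean, easy samples, and noisy or hard ones are introduced gradually once the detector is stable.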
arXiv Detail & Related papers (2020-10-19T16:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.