Robust and Accurate Object Detection via Self-Knowledge Distillation
- URL: http://arxiv.org/abs/2111.07239v1
- Date: Sun, 14 Nov 2021 04:40:15 GMT
- Title: Robust and Accurate Object Detection via Self-Knowledge Distillation
- Authors: Weipeng Xu, Pengzhi Chu, Renhao Xie, Xiongziyan Xiao, Hongcheng Huang
- Abstract summary: Unified Decoupled Feature Alignment (UDFA) is a novel fine-tuning paradigm which achieves better performance than existing methods.
We show that UDFA can surpass the standard training and state-of-the-art adversarial training methods for object detection.
- Score: 9.508466066051572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection has achieved promising performance on clean datasets, but
how to achieve a better tradeoff between adversarial robustness and clean
precision is still under-explored. Adversarial training is the mainstream
method for improving robustness, but most such methods sacrifice clean
precision, relative to standard training, to gain robustness. In this paper, we propose
Unified Decoupled Feature Alignment (UDFA), a novel fine-tuning paradigm which
achieves better performance than existing methods, by fully exploring the
combination between self-knowledge distillation and adversarial training for
object detection. We first use decoupled foreground/background features to construct
a self-knowledge distillation branch between the clean feature representation from the
pretrained detector (serving as the teacher) and the adversarial feature representation
from the student detector. Then we explore self-knowledge distillation from a
new angle by decoupling the original branch into a self-supervised learning branch
and a new self-knowledge distillation branch. With extensive experiments on the
PASCAL-VOC and MS-COCO benchmarks, the evaluation results show that UDFA can
surpass the standard training and state-of-the-art adversarial training methods
for object detection. For example, compared with teacher detector, our approach
on GFLV2 with ResNet-50 improves clean precision by 2.2 AP on PASCAL-VOC;
compared with SOTA adversarial training methods, our approach improves clean
precision by 1.6 AP, while improving adversarial robustness by 0.5 AP. Our code
will be available at https://github.com/grispeut/udfa.
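The core idea of the self-knowledge distillation branch can be sketched as a feature-alignment loss between the frozen teacher's clean-image features and the student's adversarial-image features, with separate foreground and background terms. The snippet below is a minimal NumPy illustration only; the mask construction, the weighting scheme, and the function names are assumptions, not the paper's implementation.

```python
import numpy as np

def decoupled_alignment_loss(teacher_feat, student_feat, fg_mask,
                             fg_weight=1.0, bg_weight=0.5):
    """Mean-squared feature alignment, split into foreground/background terms.

    teacher_feat: clean-image features from the frozen pretrained detector, shape (C, H, W)
    student_feat: adversarial-image features from the student detector, shape (C, H, W)
    fg_mask:      boolean (H, W) mask marking foreground (object) locations
    """
    diff = (teacher_feat - student_feat) ** 2      # elementwise squared error, (C, H, W)
    per_loc = diff.mean(axis=0)                    # average over channels, (H, W)
    fg = per_loc[fg_mask].mean() if fg_mask.any() else 0.0
    bg = per_loc[~fg_mask].mean() if (~fg_mask).any() else 0.0
    return fg_weight * fg + bg_weight * bg
```

With identical teacher and student features the loss is zero; decoupling lets foreground (object) regions be weighted differently from background, which is the motivation the abstract gives for using decoupled features.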
Related papers
- New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes [11.694880978089852]
Adversarial Training (AT) is one of the most effective methods to enhance the robustness of DNNs.
Existing AT methods suffer from an inherent trade-off between adversarial robustness and clean accuracy.
We propose a new AT paradigm by introducing an additional dummy class for each original class.
arXiv Detail & Related papers (2024-10-16T15:36:10Z) - Large Class Separation is not what you need for Relational
Reasoning-based OOD Detection [12.578844450586]
Out-Of-Distribution (OOD) detection methods provide a solution by identifying semantic novelty.
Most of these methods leverage a learning stage on the known data, which means training (or fine-tuning) a model to capture the concept of normality.
A viable alternative is that of evaluating similarities in the embedding space produced by large pre-trained models without any further learning effort.
arXiv Detail & Related papers (2023-07-12T14:10:15Z) - DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network dubbed DOAD, to improve the efficiency of spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z) - Deepfake Detection via Joint Unsupervised Reconstruction and Supervised
Classification [25.84902508816679]
We introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously.
This method shares the information learned by one task with the other, focusing on an aspect that other existing works rarely consider.
Our method achieves state-of-the-art performance on three commonly-used datasets.
arXiv Detail & Related papers (2022-11-24T05:44:26Z) - Label, Verify, Correct: A Simple Few Shot Object Detection Method [93.84801062680786]
We introduce a simple pseudo-labelling method to source high-quality pseudo-annotations from a training set.
We present two novel methods to improve the precision of the pseudo-labelling process.
Our method achieves state-of-the-art or second-best performance compared to existing approaches.
arXiv Detail & Related papers (2021-12-10T18:59:06Z) - G-DetKD: Towards General Distillation Framework for Object Detectors via
Contrastive and Semantic-guided Feature Imitation [49.421099172544196]
We propose a novel semantic-guided feature imitation technique, which automatically performs soft matching between feature pairs across all pyramid levels.
We also introduce contrastive distillation to effectively capture the information encoded in the relationship between different feature regions.
Our method consistently outperforms existing detection KD techniques, and works both when components in the framework are used separately and when they are used in conjunction.
arXiv Detail & Related papers (2021-08-17T07:44:27Z) - End-to-End Semi-Supervised Object Detection with Soft Teacher [63.26266730447914]
This paper presents an end-to-end semi-supervised object detection approach, in contrast to previous more complex multi-stage methods.
The proposed approach outperforms previous methods by a large margin under various labeling ratios.
On the state-of-the-art Swin Transformer-based object detector, it can still significantly improve the detection accuracy by +1.5 mAP.
arXiv Detail & Related papers (2021-06-16T17:59:30Z) - Pseudo-IoU: Improving Label Assignment in Anchor-Free Object Detection [60.522877583407904]
Current anchor-free object detectors are quite simple and effective yet lack accurate label assignment methods.
We present Pseudo-Intersection-over-Union(Pseudo-IoU): a simple metric that brings more standardized and accurate assignment rule into anchor-free object detection frameworks.
Our method achieves comparable performance to other recent state-of-the-art anchor-free methods without bells and whistles.
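As a hedged illustration of an IoU-based assignment rule of the kind Pseudo-IoU describes, the sketch below expands a sampling point into a fixed-size pseudo box and assigns it to the ground-truth box with the highest overlap. The fixed pseudo-box size, the threshold value, and all function names are assumptions for illustration, not the paper's actual scheme.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def pseudo_box(cx, cy, size):
    """Expand an anchor-free sampling point into a fixed-size pseudo box (assumed scheme)."""
    half = size / 2.0
    return (cx - half, cy - half, cx + half, cy + half)

def assign_point(cx, cy, gt_boxes, size=8.0, thresh=0.3):
    """Assign the point to the GT box with the highest pseudo-IoU, or None below thresh."""
    pb = pseudo_box(cx, cy, size)
    scores = [iou(pb, gt) for gt in gt_boxes]
    best = max(range(len(scores)), key=lambda i: scores[i]) if scores else None
    return best if best is not None and scores[best] >= thresh else None
```

This turns the heuristic "is this point inside/near an object?" question into the same standardized IoU comparison that anchor-based assignment uses, which is the property the abstract highlights.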
arXiv Detail & Related papers (2021-04-29T02:48:47Z) - Unsupervised Pretraining for Object Detection by Patch Reidentification [72.75287435882798]
Unsupervised representation learning achieves promising performance in pre-training representations for object detectors.
This work proposes a simple yet effective representation learning method for object detection, named patch re-identification (Re-ID).
Our method significantly outperforms its counterparts on COCO in all settings, such as different training iterations and data percentages.
arXiv Detail & Related papers (2021-03-08T15:13:59Z) - Using Feature Alignment Can Improve Clean Average Precision and
Adversarial Robustness in Object Detection [11.674302325688862]
We propose that feature alignment of intermediate layers can improve clean AP and robustness in object detection.
We conduct extensive experiments on PASCAL VOC and MS-COCO datasets to verify the effectiveness of our proposed approach.
arXiv Detail & Related papers (2020-12-08T11:54:39Z) - Self-Knowledge Distillation with Progressive Refinement of Targets [1.1470070927586016]
We propose a simple yet effective regularization method named progressive self-knowledge distillation (PS-KD).
PS-KD progressively distills a model's own knowledge to soften hard targets during training.
We show that PS-KD provides an effect of hard example mining by rescaling gradients according to difficulty in classifying examples.
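The target-softening step described above can be sketched as interpolating between the hard one-hot label and the model's own earlier prediction, with a mixing coefficient that grows over training. The linear schedule and the names below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def ps_kd_target(hard_onehot, past_pred, epoch, total_epochs, alpha_max=0.8):
    """Soften a one-hot target with the model's own earlier prediction.

    The mixing coefficient grows linearly with training progress, so early
    training relies on hard labels and later training on self-distilled targets.
    """
    alpha = alpha_max * (epoch / total_epochs)   # assumed linear schedule
    return (1.0 - alpha) * hard_onehot + alpha * past_pred
```

Because the past prediction spreads probability mass over hard-to-classify classes, the resulting gradient rescaling gives more weight to difficult examples, which is the hard-example-mining effect the abstract describes.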
arXiv Detail & Related papers (2020-06-22T04:06:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.