Learning Domain Adaptive Object Detection with Probabilistic Teacher
- URL: http://arxiv.org/abs/2206.06293v1
- Date: Mon, 13 Jun 2022 16:24:22 GMT
- Title: Learning Domain Adaptive Object Detection with Probabilistic Teacher
- Authors: Meilin Chen, Weijie Chen, Shicai Yang, Jie Song, Xinchao Wang, Lei
Zhang, Yunfeng Yan, Donglian Qi, Yueting Zhuang, Di Xie, Shiliang Pu
- Abstract summary: We present a simple yet effective framework, termed Probabilistic Teacher (PT).
PT aims to capture the uncertainty of unlabeled target data from a gradually evolving teacher and guides the learning of a student in a mutually beneficial manner.
We also present a novel Entropy Focal Loss (EFL) to further facilitate the uncertainty-guided self-training.
- Score: 93.76128726257946
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-training for unsupervised domain adaptive object detection is a
challenging task, of which the performance depends heavily on the quality of
pseudo boxes. Despite the promising results, prior works have largely
overlooked the uncertainty of pseudo boxes during self-training. In this paper,
we present a simple yet effective framework, termed Probabilistic Teacher
(PT), which aims to capture the uncertainty of unlabeled target data from a
gradually evolving teacher and guides the learning of a student in a mutually
beneficial manner. Specifically, we propose to leverage the uncertainty-guided
consistency training to promote classification adaptation and localization
adaptation, rather than filtering pseudo boxes via an elaborate confidence
threshold. In addition, we conduct anchor adaptation in parallel with
localization adaptation, since anchors can be regarded as learnable parameters.
Together with this framework, we also present a novel Entropy Focal Loss (EFL)
to further facilitate the uncertainty-guided self-training. Equipped with EFL,
PT outperforms all previous baselines by a large margin and achieves new
state-of-the-art results.
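The abstract does not spell out the form of EFL. As a rough illustration of an uncertainty-guided objective, the PyTorch sketch below implements an entropy-weighted soft cross-entropy: the teacher's normalized prediction entropy drives a focal-style weight, so uncertain pseudo-labels contribute less to the student's gradient. The function name, the `gamma` parameter, and the exact weighting scheme are illustrative assumptions, not the paper's definition.

```python
import math

import torch
import torch.nn.functional as F

def entropy_focal_loss(student_logits, teacher_probs, gamma=2.0, eps=1e-8):
    """Illustrative entropy-weighted soft cross-entropy (an assumption,
    not the paper's exact EFL): high-entropy (uncertain) teacher
    distributions are down-weighted in a focal-loss style.

    student_logits: (N, C) raw scores from the student's classification head.
    teacher_probs:  (N, C) soft class distributions from the teacher.
    """
    num_classes = teacher_probs.size(-1)
    # Teacher entropy, normalized to [0, 1] by its maximum value log(C).
    entropy = -(teacher_probs * (teacher_probs + eps).log()).sum(dim=-1)
    entropy = entropy / math.log(num_classes)
    # Focal-style focusing factor: confident (low-entropy) targets keep a
    # weight near 1, uncertain targets are suppressed.
    weight = (1.0 - entropy).clamp(min=0.0) ** gamma
    # Soft cross-entropy between the teacher distribution and the student.
    ce = -(teacher_probs * F.log_softmax(student_logits, dim=-1)).sum(dim=-1)
    return (weight * ce).mean()
```

In a mean-teacher setup such as PT's, `teacher_probs` would typically come from the EMA teacher on a weakly augmented target image and `student_logits` from the student on a strongly augmented view of the same image.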
Related papers
- Efficient Test-Time Prompt Tuning for Vision-Language Models [41.90997623029582]
Self-TPT is a framework that leverages self-supervised learning for efficient test-time prompt tuning.
We show that Self-TPT not only significantly reduces inference costs but also achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-08-11T13:55:58Z)
- Adaptive Cascading Network for Continual Test-Time Adaptation [12.718826132518577]
We study the problem of continual test-time adaptation, where the goal is to adapt a source pre-trained model to a sequence of unlabeled target domains at test time.
Existing methods on test-time training suffer from several limitations.
arXiv Detail & Related papers (2024-07-17T01:12:57Z)
- Cross Domain Object Detection via Multi-Granularity Confidence Alignment based Mean Teacher [14.715398100791559]
Cross domain object detection learns an object detector for an unlabeled target domain by transferring knowledge from an annotated source domain.
In this study, we find that confidence misalignment of the predictions, including category-level overconfidence, instance-level task confidence inconsistency, and image-level confidence misfocusing, leads to suboptimal performance on the target domain.
arXiv Detail & Related papers (2024-07-10T15:56:24Z)
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between the predicted confidence and actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- Energy-constrained Self-training for Unsupervised Domain Adaptation [25.594991545790638]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge on a labeled source domain distribution to perform well on an unlabeled target domain.
Recent deep self-training involves an iterative process of predicting on the target domain and then taking the confident predictions as hard pseudo-labels for retraining.
In this paper, we resort to an energy-based model and constrain the training of unlabeled target samples with an energy-minimization objective (a sketch follows this list).
arXiv Detail & Related papers (2021-01-01T21:02:18Z)
- Two-phase Pseudo Label Densification for Self-training based Domain Adaptation [93.03265290594278]
We propose a novel Two-phase Pseudo Label Densification framework, referred to as TPLD.
In the first phase, we use sliding-window voting to propagate the confident predictions, utilizing the intrinsic spatial correlations in the images (see the sketch after this list).
In the second phase, we perform a confidence-based easy-hard classification.
To ease the training process and avoid noisy predictions, we introduce the bootstrapping mechanism to the original self-training loss.
arXiv Detail & Related papers (2020-12-09T02:35:25Z)
- Unsupervised Domain Adaptation for Speech Recognition via Uncertainty Driven Self-Training [55.824641135682725]
Domain adaptation experiments using WSJ as the source domain and TED-LIUM 3 as well as SWITCHBOARD as target domains show that up to 80% of the performance of a system trained on ground-truth data can be recovered.
arXiv Detail & Related papers (2020-11-26T18:51:26Z)
- Optimal Change-Point Detection with Training Sequences in the Large and Moderate Deviations Regimes [72.68201611113673]
This paper investigates a novel offline change-point detection problem from an information-theoretic perspective.
We assume that the underlying pre- and post-change distributions are not known and can only be learned from the available training sequences.
arXiv Detail & Related papers (2020-03-13T23:39:40Z)
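Referring back to the energy-constrained self-training entry above, the free energy of a softmax classifier can be written as E(x) = -logsumexp(f(x)). The sketch below shows one plausible way to combine a pseudo-label loss with an energy penalty on unlabeled target samples; the weight `lam` and the exact composition are assumptions for illustration, not the paper's stated objective.

```python
import torch
import torch.nn.functional as F

def free_energy(logits):
    # Free energy of a softmax classifier: E(x) = -logsumexp(f(x)).
    # Lower energy corresponds to regions the model assigns high density.
    return -torch.logsumexp(logits, dim=-1)

def energy_constrained_loss(logits, pseudo_labels, lam=0.1):
    """Illustrative objective (an assumption, not the paper's exact loss):
    self-training on confident pseudo-labels plus an energy penalty that
    pulls unlabeled target samples toward low-energy regions."""
    ce = F.cross_entropy(logits, pseudo_labels)
    return ce + lam * free_energy(logits).mean()
```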
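And for TPLD's first-phase sliding-window voting, the sketch below shows one plausible densification step on a segmentation-style pseudo-label map: every ignored pixel adopts the majority class among the confident pixels in its local window, exploiting spatial correlation. The window size, the box-filter vote counting, and the function name are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def sliding_window_vote(pseudo_labels, num_classes, window=7, ignore_index=255):
    """Illustrative densification step (an assumption about TPLD's exact
    procedure): each ignored pixel adopts the majority class among the
    confident pixels inside its local window (`window` should be odd).

    pseudo_labels: (H, W) long tensor; unconfident pixels are set to
                   `ignore_index`.
    """
    valid = pseudo_labels != ignore_index
    # One-hot encode only the confident pixels: (C, H, W).
    safe = pseudo_labels.clone()
    safe[~valid] = 0
    onehot = F.one_hot(safe, num_classes).permute(2, 0, 1).float()
    onehot = onehot * valid.float()  # zero out the ignored positions
    # Count class votes in each window via box filtering.
    votes = F.avg_pool2d(onehot.unsqueeze(0), window, stride=1,
                         padding=window // 2).squeeze(0)
    filled = votes.argmax(dim=0)
    has_vote = votes.sum(dim=0) > 0
    # Keep the original confident labels; fill ignored pixels that received
    # at least one vote from a confident neighbour.
    out = pseudo_labels.clone()
    fill_mask = (~valid) & has_vote
    out[fill_mask] = filled[fill_mask]
    return out
```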