Cross-Domain Object Detection via Adaptive Self-Training
- URL: http://arxiv.org/abs/2111.13216v1
- Date: Thu, 25 Nov 2021 18:50:15 GMT
- Title: Cross-Domain Object Detection via Adaptive Self-Training
- Authors: Yu-Jhe Li, Xiaoliang Dai, Chih-Yao Ma, Yen-Cheng Liu, Kan Chen, Bichen
Wu, Zijian He, Kris Kitani, Peter Vajda
- Abstract summary: We propose a self-training framework called Adaptive Unbiased Teacher (AUT)
AUT uses adversarial learning and weak-strong data augmentation to address domain shift.
We show AUT demonstrates superiority over all existing approaches and even Oracle (fully supervised) models by a large margin.
- Score: 45.48690932743266
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We tackle the problem of domain adaptation in object detection, where there
is a significant domain shift between a source (a domain with supervision) and
a target domain (a domain of interest without supervision). As a widely adopted
domain adaptation method, the self-training teacher-student framework (a
student model learns from pseudo labels generated from a teacher model) has
yielded remarkable accuracy gain on the target domain. However, it still
suffers from the large amount of low-quality pseudo labels (e.g., false
positives) generated from the teacher due to its bias toward the source domain.
To address this issue, we propose a self-training framework called Adaptive
Unbiased Teacher (AUT) leveraging adversarial learning and weak-strong data
augmentation during mutual learning to address domain shift. Specifically, we
employ feature-level adversarial training in the student model, ensuring
features extracted from the source and target domains share similar statistics.
This enables the student model to capture domain-invariant features.
Furthermore, we apply weak-strong augmentation and mutual learning between the
teacher model on the target domain and the student model on both domains. This
enables the teacher model to gradually benefit from the student model without
suffering domain shift. We show that AUT demonstrates superiority over all
existing approaches and even Oracle (fully supervised) models by a large
margin. For example, we achieve 50.9% (49.3%) mAP on Foggy Cityscapes
(Clipart1K), which is 9.2% (5.2%) and 8.2% (11.0%) higher than the previous
state-of-the-art and Oracle, respectively.
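Two generic mechanics the abstract relies on, EMA-based mutual learning and confidence filtering of teacher pseudo-labels, can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the threshold and EMA rate are assumed values, and the adversarial feature alignment and detection backbone are omitted entirely.

```python
import numpy as np

# Assumed hyperparameters for illustration; the paper's values may differ.
CONF_THRESH = 0.8   # confidence cutoff for keeping teacher pseudo-labels
EMA_ALPHA = 0.999   # smoothing rate for the teacher's EMA update

def ema_update(teacher_w, student_w, alpha=EMA_ALPHA):
    """Mutual-learning step: the teacher is an exponential moving average
    of the student, so it absorbs target-domain knowledge gradually."""
    return {name: alpha * teacher_w[name] + (1.0 - alpha) * student_w[name]
            for name in teacher_w}

def filter_pseudo_labels(class_scores, thresh=CONF_THRESH):
    """Keep only teacher detections whose top class score clears the
    threshold, discarding likely false positives."""
    confidence = class_scores.max(axis=1)
    keep = confidence >= thresh
    return class_scores.argmax(axis=1)[keep], keep

# Toy example: 3 candidate boxes scored over 2 classes.
scores = np.array([[0.9, 0.1],    # confident -> kept
                   [0.55, 0.45],  # ambiguous -> dropped
                   [0.2, 0.8]])   # confident -> kept
labels, mask = filter_pseudo_labels(scores)
```

In the real framework the teacher scores weakly augmented target images while the student trains on strongly augmented ones; the sketch above only shows the label-filtering and weight-averaging arithmetic.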
Related papers
- Context Aware Grounded Teacher for Source Free Object Detection [0.0]
We focus on the Source Free Object Detection (SFOD) problem, where source data is unavailable during adaptation.
Several approaches have leveraged a semi-supervised student-teacher architecture to bridge domain discrepancy.
We introduce Grounded Teacher (GT) as a standard framework to tackle the problem of context bias.
arXiv Detail & Related papers (2025-04-21T19:13:33Z)
- Cross-Domain Policy Adaptation by Capturing Representation Mismatch [53.087413751430255]
It is vital to learn effective policies that can be transferred to different domains with dynamics discrepancies in reinforcement learning (RL).
In this paper, we consider dynamics adaptation settings where there exists dynamics mismatch between the source domain and the target domain.
We perform representation learning only in the target domain and measure the representation deviations on the transitions from the source domain.
arXiv Detail & Related papers (2024-05-24T09:06:12Z)
- Contrastive Mean Teacher for Domain Adaptive Object Detectors [20.06919799819326]
Mean-teacher self-training is a powerful paradigm in unsupervised domain adaptation for object detection, but it struggles with low-quality pseudo-labels.
We propose Contrastive Mean Teacher (CMT) -- a unified, general-purpose framework with the two paradigms naturally integrated to maximize beneficial learning signals.
CMT leads to new state-of-the-art target-domain performance: 51.9% mAP on Foggy Cityscapes, outperforming the previously best by 2.1% mAP.
arXiv Detail & Related papers (2023-05-04T17:55:17Z)
- Focus on Your Target: A Dual Teacher-Student Framework for Domain-adaptive Semantic Segmentation [210.46684938698485]
We study unsupervised domain adaptation (UDA) for semantic segmentation.
We find that, by decreasing/increasing the proportion of training samples from the target domain, the 'learning ability' is strengthened/weakened.
We propose a novel dual teacher-student (DTS) framework and equip it with a bidirectional learning strategy.
arXiv Detail & Related papers (2023-03-16T05:04:10Z)
- Multilevel Knowledge Transfer for Cross-Domain Object Detection [26.105283273950942]
Domain shift is a well-known problem where a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target).
In this work, we address the domain shift problem for the object detection task.
Our approach relies on gradually removing the domain shift between the source and the target domains.
arXiv Detail & Related papers (2021-08-02T15:24:40Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study arguably the least restrictive setting of domain adaptation in the sense of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
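The consistency regularizer described above can be illustrated with a toy penalty: predictions on a target sample should agree with those of its nearest neighbors in feature space. This is a generic NumPy sketch of that idea, not the paper's exact objective; `k` and the L2 disagreement measure are illustrative choices.

```python
import numpy as np

def neighborhood_consistency(probs, features, k=2):
    """Average squared disagreement between each sample's predicted
    distribution and those of its k nearest feature-space neighbors;
    lower means locally smoother predictions."""
    # Pairwise squared distances in feature space.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude each point itself
    nn = np.argsort(d2, axis=1)[:, :k]     # indices of k nearest neighbors
    diffs = probs[:, None, :] - probs[nn]  # (N, k, C) prediction gaps
    return float((diffs ** 2).sum(-1).mean())

feats = np.arange(8, dtype=float).reshape(4, 2)  # toy target features
probs = np.full((4, 2), 0.5)                     # identical predictions
penalty = neighborhood_consistency(probs, feats, k=2)
```

Identical predictions yield a zero penalty; disagreeing neighbors raise it, which is what self-training would then minimize.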
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
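The prototype idea behind PCS can be sketched generically: compute one normalized mean feature per class and label unlabeled samples by cosine similarity to those prototypes. This is a hypothetical NumPy illustration of the general mechanism, not the PCS objective itself.

```python
import numpy as np

def l2_normalize(x):
    """Scale each row to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def class_prototypes(features, labels, num_classes):
    """One normalized mean feature vector per class (a 'prototype')."""
    return l2_normalize(np.stack([features[labels == c].mean(axis=0)
                                  for c in range(num_classes)]))

def assign_to_prototypes(features, prototypes):
    """Label samples by cosine similarity to the class prototypes, the
    basic operation used to align semantic structure across domains."""
    return (l2_normalize(features) @ prototypes.T).argmax(axis=1)

# Toy labeled source features for 2 classes, plus 2 unlabeled target samples.
src = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
lab = np.array([0, 0, 1, 1])
protos = class_prototypes(src, lab, num_classes=2)
tgt = np.array([[0.8, 0.2], [0.2, 0.8]])
```

A shared embedding space where target samples land near the right source prototype is exactly the cross-domain semantic alignment the summary refers to.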
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
- FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation [26.929772844572213]
We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain.
We train the source-dominant model and the target-dominant model that have complementary characteristics.
Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain.
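The fixed ratio-based mixup described above reduces to a simple convex blend of source and target inputs. A minimal NumPy sketch follows; the ratios 0.7 and 0.3 are illustrative assumptions, not the paper's values.

```python
import numpy as np

def fixed_ratio_mixup(x_src, x_tgt, lam):
    """Blend a source and a target input with a fixed ratio lam,
    producing a sample from an intermediate domain between the two."""
    return lam * x_src + (1.0 - lam) * x_tgt

x_s = np.array([1.0, 1.0])  # toy source sample
x_t = np.array([0.0, 0.0])  # toy target sample
source_dominant = fixed_ratio_mixup(x_s, x_t, lam=0.7)  # closer to source
target_dominant = fixed_ratio_mixup(x_s, x_t, lam=0.3)  # closer to target
```

Training one model on the source-dominant blend and another on the target-dominant blend gives the complementary pair the summary mentions.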
arXiv Detail & Related papers (2020-11-18T11:58:19Z)
- Teacher-Student Consistency For Multi-Source Domain Adaptation [28.576613317253035]
In Multi-Source Domain Adaptation (MSDA), models are trained on samples from multiple source domains and used for inference on a different target domain.
We propose Multi-source Student Teacher (MUST), a novel procedure designed to alleviate these issues.
arXiv Detail & Related papers (2020-10-20T06:17:40Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain compared to the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and jointly learns to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.