Self-Guided Adaptation: Progressive Representation Alignment for Domain
Adaptive Object Detection
- URL: http://arxiv.org/abs/2003.08777v2
- Date: Sun, 22 Mar 2020 09:18:10 GMT
- Title: Self-Guided Adaptation: Progressive Representation Alignment for Domain
Adaptive Object Detection
- Authors: Zongxian Li, Qixiang Ye, Chong Zhang, Jingjing Liu, Shijian Lu and
Yonghong Tian
- Abstract summary: Unsupervised domain adaptation (UDA) has achieved unprecedented success in improving the cross-domain robustness of object detection models.
Existing UDA methods largely ignore the instantaneous data distribution during model learning, which could deteriorate the feature representation given large domain shift.
We propose a Self-Guided Adaptation (SGA) model, targeted at aligning feature representations and transferring object detection models across domains.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) has achieved unprecedented success in
improving the cross-domain robustness of object detection models. However,
existing UDA methods largely ignore the instantaneous data distribution during
model learning, which could deteriorate the feature representation given large
domain shift. In this work, we propose a Self-Guided Adaptation (SGA) model,
targeted at aligning feature representations and transferring object detection
models across domains while considering the instantaneous alignment difficulty.
The core of SGA is to calculate "hardness" factors for sample pairs indicating
domain distance in a kernel space. With the hardness factor, the proposed SGA
adaptively weights the importance of samples and assigns them different
constraints. Guided by hardness factors, Self-Guided Progressive Sampling
(SPS) is implemented in an "easy-to-hard" manner during model adaptation. Using
multi-stage convolutional features, SGA is further aggregated to fully align
hierarchical representations of detection models. Extensive experiments on
commonly used benchmarks show that SGA improves on state-of-the-art methods
by significant margins, while demonstrating its effectiveness under large
domain shift.
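The abstract does not give the exact form of the hardness factor or the sampling schedule, but the idea can be sketched as follows: measure the distance between a source/target feature pair in a kernel space (here a Gaussian RBF kernel, an assumption), and run a curriculum that admits the easiest pairs first and progressively adds harder ones. All function names (`hardness`, `progressive_sample`) and parameters are hypothetical, for illustration only.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Gaussian RBF kernel similarity between two feature vectors.
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def hardness(source_feat, target_feat, sigma=1.0):
    # A pair whose features are close in the kernel space (similarity
    # near 1) is "easy" to align; a distant pair (near 0) is "hard".
    return 1.0 - rbf_kernel(source_feat, target_feat, sigma)

def progressive_sample(pairs, step, num_steps):
    # Easy-to-hard curriculum: at step t, keep the fraction (t+1)/num_steps
    # of sample pairs with the lowest hardness.
    ranked = sorted(pairs, key=lambda p: hardness(p[0], p[1]))
    keep = max(1, int(len(ranked) * (step + 1) / num_steps))
    return ranked[:keep]

rng = np.random.default_rng(0)
pairs = [(rng.normal(size=4), rng.normal(size=4)) for _ in range(8)]
early = progressive_sample(pairs, step=0, num_steps=4)  # easiest 25%
late = progressive_sample(pairs, step=3, num_steps=4)   # all pairs
print(len(early), len(late))  # 2 8
```

In the paper the hardness factor also reweights the alignment constraint per sample rather than only gating which pairs are sampled; the snippet above shows only the curriculum-ordering aspect.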
Related papers
- Semi Supervised Heterogeneous Domain Adaptation via Disentanglement and Pseudo-Labelling [4.33404822906643]
Semi-supervised domain adaptation methods leverage information from a source labelled domain to generalize over a scarcely labelled target domain.
Such a setting is denoted as Semi-Supervised Heterogeneous Domain Adaptation (SSHDA).
We introduce SHeDD (Semi-supervised Heterogeneous Domain Adaptation via Disentanglement) an end-to-end neural framework tailored to learning a target domain.
arXiv Detail & Related papers (2024-06-20T08:02:49Z) - DATR: Unsupervised Domain Adaptive Detection Transformer with Dataset-Level Adaptation and Prototypical Alignment [7.768332621617199]
We introduce a strong DETR-based detector named Domain Adaptive detection TRansformer (DATR) for unsupervised domain adaptation of object detection.
Our proposed DATR incorporates a mean-teacher based self-training framework, utilizing pseudo-labels generated by the teacher model to further mitigate domain bias.
Experiments demonstrate superior performance and generalization capabilities of our proposed DATR in multiple domain adaptation scenarios.
arXiv Detail & Related papers (2024-05-20T03:48:45Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Diffusion-based Target Sampler for Unsupervised Domain Adaptation [5.025971841729201]
Large domain shifts and the sample scarcity in the target domain make existing UDA methods achieve suboptimal performance.
We propose a plug-and-play Diffusion-based Target Sampler (DTS) to generate high-fidelity, diverse pseudo target samples.
The generated samples can well simulate the data distribution of the target domain and help existing UDA methods transfer from the source domain to the target domain more easily.
arXiv Detail & Related papers (2023-03-17T02:07:43Z) - Robust Domain Adaptive Object Detection with Unified Multi-Granularity Alignment [59.831917206058435]
Domain adaptive detection aims to improve the generalization of detectors on the target domain.
Recent approaches achieve domain adaptation through feature alignment at different granularities via adversarial learning.
We introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning.
arXiv Detail & Related papers (2023-01-01T08:38:07Z) - Unsupervised Contrastive Domain Adaptation for Semantic Segmentation [75.37470873764855]
We introduce contrastive learning for feature alignment in cross-domain adaptation.
The proposed approach consistently outperforms state-of-the-art methods for domain adaptation.
It achieves 60.2% mIoU on the Cityscapes dataset.
arXiv Detail & Related papers (2022-04-18T16:50:46Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - Semi-Supervised Domain Adaptation via Adaptive and Progressive Feature
Alignment [32.77436219094282]
SSDAS employs a few labeled target samples as anchors for adaptive and progressive feature alignment between labeled source samples and unlabeled target samples.
In addition, we replace the dissimilar source features by high-confidence target features continuously during the iterative training process.
Extensive experiments show the proposed SSDAS greatly outperforms a number of baselines.
arXiv Detail & Related papers (2021-06-05T09:12:50Z) - Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shifts.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
arXiv Detail & Related papers (2021-04-19T08:46:08Z) - Learning from Scale-Invariant Examples for Domain Adaptation in Semantic
Segmentation [6.320141734801679]
We propose a novel approach of exploiting scale-invariance property of semantic segmentation model for self-supervised domain adaptation.
Our algorithm is based on the reasonable assumption that, in general, regardless of the size of an object or stuff region (given context), the semantic labeling should be unchanged.
We show that this constraint is violated over the images of the target domain, and hence could be used to transfer labels in-between differently scaled patches.
arXiv Detail & Related papers (2020-07-28T19:40:45Z)