Incorporating Pre-training Data Matters in Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2308.03097v1
- Date: Sun, 6 Aug 2023 12:23:40 GMT
- Title: Incorporating Pre-training Data Matters in Unsupervised Domain
Adaptation
- Authors: Yinsong Xu, Aidong Men, Yang Liu, Qingchao Chen
- Abstract summary: Unsupervised domain adaptation (UDA) and Source-free UDA (SFUDA) methods formulate the problem involving two domains: source and target.
We investigate the correlation among ImageNet, the source, and the target domain.
We present a novel framework, TriDA, which preserves the semantic structure of the pre-training dataset during fine-tuning.
- Score: 13.509286043322442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) and Source-free UDA (SFUDA) methods
formulate the problem involving two domains: source and target. They typically
employ a standard training approach that begins with models pre-trained on
large-scale datasets, e.g., ImageNet, while rarely discussing its effect.
Recognizing this gap, we investigate the following research questions: (1) What
is the correlation among ImageNet, the source, and the target domain? (2) How
does pre-training on ImageNet influence the target risk? To answer the first
question, we empirically observe an interesting Spontaneous Pulling (SP)
Effect in fine-tuning, where the discrepancies between any two of the three
domains (ImageNet, source, target) decrease, but at the cost of impairing the
semantic structure of the pre-training domain. For the second question, we put
forward a theory that explains SP and shows that the target risk is bounded by
gradient disparities among the three domains. Our observations reveal a key
limitation of existing methods: adaptation performance is hindered if the
semantic cluster structure of the pre-training dataset (i.e., ImageNet) is
impaired. To address this, we incorporate ImageNet as a third domain and
redefine UDA/SFUDA as a three-player game. Specifically, inspired by the
theory and empirical findings, we present a novel framework, termed TriDA,
which additionally preserves the semantic structure of the pre-training dataset during
fine-tuning. Experimental results demonstrate that it achieves state-of-the-art
performance across various UDA and SFUDA benchmarks.
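The abstract does not spell out TriDA's training objective. Purely as an illustration of what a three-player fine-tuning loss could look like, here is a minimal PyTorch sketch; the entropy-based target term, the separate ImageNet head (imnet_head), and the weight lam are assumptions, not the paper's actual method.

```python
import torch.nn.functional as F

def tri_domain_step(backbone, src_head, imnet_head, batches, lam=0.1):
    """One fine-tuning step over (source, target, ImageNet) batches."""
    (xs, ys), (xt, _), (xi, yi) = batches   # target labels unused (UDA)

    # Supervised loss on the labeled source domain.
    loss_src = F.cross_entropy(src_head(backbone(xs)), ys)

    # Placeholder unsupervised target term (entropy minimization); actual
    # UDA/SFUDA methods use pseudo-labels, alignment losses, etc.
    probs_t = src_head(backbone(xt)).softmax(dim=1)
    loss_tgt = -(probs_t * probs_t.clamp_min(1e-8).log()).sum(dim=1).mean()

    # Third player (assumption): keep classifying replayed ImageNet samples
    # correctly to preserve the pre-training domain's semantic clusters.
    loss_imnet = F.cross_entropy(imnet_head(backbone(xi)), yi)

    return loss_src + loss_tgt + lam * loss_imnet
```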
Related papers
- CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection [14.063365469339812]
LiDAR-based 3D Object Detection methods often do not generalize well to target domains outside the source (or training) data distribution.
We introduce a novel unsupervised domain adaptation (UDA) method, called CMDA, which leverages visual semantic cues from an image modality.
We also introduce a self-training-based learning strategy, wherein a model is adversarially trained to generate domain-invariant features.
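The adversarial training mentioned above is commonly implemented with a gradient reversal layer (GRL): a small domain classifier tries to tell source from target features, while reversed gradients push the encoder toward domain-invariant features. The sketch below shows only this generic pattern, not CMDA's actual cross-modal design; the feature width and classifier head are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flipping the gradient makes the encoder *fool* the classifier.
        return -ctx.alpha * grad_output, None

domain_clf = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

def domain_adv_loss(features, domain_labels, alpha=1.0):
    """features: (B, 256); domain_labels: (B,) with 0 = source, 1 = target."""
    logits = domain_clf(GradReverse.apply(features, alpha))
    return nn.functional.cross_entropy(logits, domain_labels)
```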
arXiv Detail & Related papers (2024-03-06T14:12:38Z)
- Unsupervised Adaptation of Polyp Segmentation Models via Coarse-to-Fine Self-Supervision [16.027843524655516]
We study a practical problem of Source-Free Domain Adaptation (SFDA), which eliminates the reliance on annotated source data.
Current SFDA methods focus on extracting domain knowledge from the source-trained model but neglect the intrinsic structure of the target domain.
We propose a new SFDA framework, called Region-to-Pixel Adaptation Network (RPANet), which learns region-level and pixel-level discriminative representations through coarse-to-fine self-supervision.
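The summary does not detail the coarse-to-fine scheme; one plausible reading, sketched below under stated assumptions, pools pixel predictions into region-level pseudo labels and lets confident regions supervise pixels. The region size, confidence threshold, and binary (polyp vs. background) setup are illustrative, not RPANet's actual design.

```python
import torch.nn.functional as F

def coarse_to_fine_loss(pixel_logits, region=16, thresh=0.9):
    """pixel_logits: (B, 1, H, W) foreground logits; H, W divisible by region."""
    probs = pixel_logits.sigmoid()
    # Coarse: average-pool pixel probabilities into region-level scores.
    region_probs = F.avg_pool2d(probs, kernel_size=region)
    # Confident regions become pseudo labels (1 = foreground, 0 = background).
    pseudo = (region_probs > thresh).float()
    confident = ((region_probs > thresh) | (region_probs < 1 - thresh)).float()
    # Fine: broadcast region pseudo labels back to pixel resolution.
    pseudo_up = F.interpolate(pseudo, scale_factor=region, mode="nearest")
    mask_up = F.interpolate(confident, scale_factor=region, mode="nearest")
    loss = F.binary_cross_entropy(probs, pseudo_up, reduction="none")
    return (loss * mask_up).sum() / mask_up.sum().clamp_min(1.0)
```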
arXiv Detail & Related papers (2023-08-13T02:37:08Z)
- Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation [108.33885637197614]
Unsupervised domain adaptation (UDA) and domain generalization (DG) enable machine learning models trained on a source domain to perform well on unlabeled or unseen target domains.
We propose HRDA, a multi-resolution framework for UDA&DG, that combines the strengths of small high-resolution crops to preserve fine segmentation details and large low-resolution crops to capture long-range context dependencies with a learned scale attention.
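As a toy illustration (not HRDA's exact fusion, and with simplified shapes), a learned scale attention can blend a detail prediction from a high-resolution crop into the upsampled low-resolution context prediction:

```python
def fuse_scales(logits_lr, logits_hr_crop, attn_logits, crop_box):
    """logits_lr: (B, C, H, W) context prediction upsampled to full size.
    logits_hr_crop: (B, C, h, w) detail prediction for the crop at crop_box.
    attn_logits: (B, 1, H, W) learned scale attention logits."""
    y0, x0 = crop_box
    _, _, h, w = logits_hr_crop.shape
    a = attn_logits.sigmoid()
    fused = logits_lr.clone()
    ctx = logits_lr[:, :, y0:y0 + h, x0:x0 + w]   # context inside the crop
    a_crop = a[:, :, y0:y0 + h, x0:x0 + w]
    # Inside the crop: attention-weighted blend of detail and context;
    # outside the crop only the low-resolution context is available.
    fused[:, :, y0:y0 + h, x0:x0 + w] = a_crop * logits_hr_crop + (1 - a_crop) * ctx
    return fused
```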
arXiv Detail & Related papers (2023-04-26T15:18:45Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves a better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
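A minimal version of such a shared auxiliary task might look like the sketch below, where an SDF-style head on the shared latent is asked to predict zero distance at scanned points; SALUDA's actual surface representation and losses are richer than this.

```python
import torch

def surface_aux_loss(encoder, sdf_head, points_src, points_tgt):
    """points_*: (B, N, 3) lidar point clouds from each domain."""
    loss = 0.0
    for pts in (points_src, points_tgt):
        z = encoder(pts)                               # shared latent, (B, D)
        z_exp = z.unsqueeze(1).expand(-1, pts.shape[1], -1)
        pred = sdf_head(torch.cat([pts, z_exp], dim=-1)).squeeze(-1)
        # Scanned points lie on the surface, so the predicted distance at
        # them should be zero; sharing the encoder and head across domains
        # forces one common latent space for source and target.
        loss = loss + pred.abs().mean()
    return loss
```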
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Domain-aware Triplet loss in Domain Generalization [0.0]
Domain shift is caused by discrepancies in the distributions of the testing and training data.
We design a domain-aware triplet loss for domain generalization to help the model cluster similar semantic features.
Our algorithm is designed to disperse domain information in the embedding space.
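One way such a loss could be instantiated is sketched below: positives are same-class samples taken from other domains, so pulling them together clusters semantic features while dispersing domain information. The hard mining and margin are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def domain_aware_triplet(embed, labels, domains, margin=0.3):
    """embed: (N, D) embeddings; labels, domains: (N,) integer tensors."""
    dist = torch.cdist(embed, embed)                  # pairwise distances
    same_cls = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_dom = domains.unsqueeze(0) == domains.unsqueeze(1)
    eye = torch.eye(len(embed), dtype=torch.bool, device=embed.device)
    # Positives: same class but a different domain; negatives: other classes.
    pos_mask = same_cls & ~same_dom & ~eye
    big = dist.max().detach() + 1.0
    hard_pos = (dist * pos_mask).max(dim=1).values        # farthest positive
    hard_neg = (dist + big * same_cls).min(dim=1).values  # nearest negative
    valid = pos_mask.any(dim=1)                # anchors with a usable positive
    if not valid.any():
        return embed.sum() * 0.0
    return F.relu(hard_pos - hard_neg + margin)[valid].mean()
```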
arXiv Detail & Related papers (2023-03-01T14:02:01Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
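In spirit this resembles virtual adversarial training applied to penultimate features instead of pixels; the sketch below follows that reading, with illustrative step sizes, and is not claimed to be the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def penultimate_adv_loss(classifier, feats, eps=1.0, xi=1e-2):
    """feats: (B, D) penultimate activations of unlabeled target samples."""
    with torch.no_grad():
        p = classifier(feats).softmax(dim=1)          # current prediction
    # Estimate the feature perturbation that changes the prediction most.
    d = xi * F.normalize(torch.randn_like(feats), dim=1)
    d.requires_grad_(True)
    log_q = classifier(feats.detach() + d).log_softmax(dim=1)
    grad = torch.autograd.grad(F.kl_div(log_q, p, reduction="batchmean"), d)[0]
    r_adv = eps * F.normalize(grad.detach(), dim=1)
    # Train the model to keep its prediction under that perturbation.
    log_q_adv = classifier(feats + r_adv).log_softmax(dim=1)
    return F.kl_div(log_q_adv, p, reduction="batchmean")
```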
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data from a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Unsupervised Domain Adaptation for Monocular 3D Object Detection via Self-Training [57.25828870799331]
We propose STMono3D, a new self-teaching framework for unsupervised domain adaptation on Mono3D.
We develop a teacher-student paradigm to generate adaptive pseudo labels on the target domain.
STMono3D achieves remarkable performance on all evaluated datasets and even surpasses fully supervised results on the KITTI 3D object detection dataset.
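The teacher-student paradigm typically pairs an EMA-updated teacher with confidence-filtered pseudo labels. The classification-style sketch below shows only this generic loop; STMono3D's Mono3D-specific components (e.g., geometry-aware pseudo-label handling) are omitted.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, m=0.999):
    """Teacher weights track an exponential moving average of the student."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)

def self_training_step(student, teacher, x_target, thresh=0.8):
    with torch.no_grad():
        probs = teacher(x_target).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
    keep = conf > thresh                    # confidence filtering (simplified)
    if keep.any():
        return F.cross_entropy(student(x_target[keep]), pseudo[keep])
    return x_target.sum() * 0.0             # no confident pseudo labels
```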
arXiv Detail & Related papers (2022-04-25T12:23:07Z)
- Domain Adaptation for Real-World Single View 3D Reconstruction [1.611271868398988]
Unsupervised domain adaptation can be used to transfer knowledge from the labeled synthetic source domain to the unlabeled real target domain.
We propose a novel architecture which takes advantage of the fact that in this setting, target domain data is unsupervised with regards to the 3D model but supervised for class labels.
Experiments are performed with ShapeNet as the source domain and domains within the Object Domain Suite (ODDS) dataset as the targets.
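The split supervision described above can be written as a simple combined loss. A hypothetical sketch, where the module names and the voxel-occupancy reconstruction loss are assumptions:

```python
import torch.nn.functional as F

def mixed_supervision_loss(encoder, decoder3d, cls_head,
                           x_src, vox_src, x_tgt, y_tgt):
    """Source: images + 3D voxels (ShapeNet); target: real images + class labels."""
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    # 3D reconstruction is supervised only where 3D ground truth exists (source).
    loss_3d = F.binary_cross_entropy_with_logits(decoder3d(z_src), vox_src)
    # Real target images contribute class-label supervision via the shared encoder.
    loss_cls = F.cross_entropy(cls_head(z_tgt), y_tgt)
    return loss_3d + loss_cls
```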
arXiv Detail & Related papers (2021-08-24T22:02:27Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
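The two steps might be organized roughly as follows; since the summary does not specify Dis-tune's losses, the distillation temperature and the information-maximization style fine-tuning objective below are illustrative stand-ins.

```python
import torch.nn.functional as F

def distill_step(target_model, x_target, soft_labels, T=2.0):
    """Step 1: soft_labels are the black-box source model's probabilities,
    collected once by querying its API on the target data."""
    log_q = (target_model(x_target) / T).log_softmax(dim=1)
    return F.kl_div(log_q, soft_labels, reduction="batchmean") * T * T

def finetune_step(target_model, x_target):
    """Step 2 (illustrative): information-maximization style fine-tuning,
    encouraging confident per-sample and diverse overall predictions."""
    p = target_model(x_target).softmax(dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()
    marginal = p.mean(dim=0)
    neg_diversity = (marginal * marginal.clamp_min(1e-8).log()).sum()
    return entropy + neg_diversity
```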
arXiv Detail & Related papers (2021-04-04T05:29:05Z)