A Prototype-Oriented Framework for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2110.12024v1
- Date: Fri, 22 Oct 2021 19:23:22 GMT
- Title: A Prototype-Oriented Framework for Unsupervised Domain Adaptation
- Authors: Korawat Tanwisuth, Xinjie Fan, Huangjie Zheng, Shujian Zhang, Hao
Zhang, Bo Chen, Mingyuan Zhou
- Abstract summary: We provide a memory- and computation-efficient probabilistic framework to extract class prototypes and align the target features with them.
We demonstrate the general applicability of our method on a wide range of scenarios, including single-source, multi-source, class-imbalance, and source-private domain adaptation.
- Score: 52.25537670028037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing methods for unsupervised domain adaptation often rely on minimizing
some statistical distance between the source and target samples in the latent
space. To avoid the sampling variability, class imbalance, and data-privacy
concerns that often plague these methods, we instead provide a memory- and
computation-efficient probabilistic framework to extract class prototypes and
align the target features with them. We demonstrate the general applicability
of our method on a wide range of scenarios, including single-source,
multi-source, class-imbalance, and source-private domain adaptation. Requiring
no additional model parameters and incurring only a moderate increase in
computation over the source model alone, the proposed method achieves
performance competitive with state-of-the-art methods.
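The abstract's core idea, extracting one prototype per class and pulling target features toward them, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's actual objective: it assumes prototypes are fixed per-class vectors, uses cosine-similarity soft assignments, and measures alignment as a KL divergence between the average assignment and a class prior (the paper's probabilistic transport formulation is richer than this).

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_alignment_loss(features, prototypes, tau=0.1, class_prior=None):
    """Soft-assign target features to class prototypes and measure how far
    the average assignment drifts from a class prior.

    features    : (n, d) target features (hypothetical inputs)
    prototypes  : (k, d) one vector per class
    tau         : softmax temperature
    class_prior : (k,) expected class distribution; uniform if None
    """
    # cosine similarity between L2-normalized features and prototypes
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    m = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    assign = softmax(f @ m.T / tau)    # (n, k) soft class assignments
    avg = assign.mean(axis=0)          # empirical class marginal
    if class_prior is None:
        class_prior = np.full(len(prototypes), 1.0 / len(prototypes))
    # KL(prior || marginal): zero when assignments match the prior
    return float(np.sum(class_prior * np.log(class_prior / avg)))
```

Under this toy loss, target features spread evenly around the prototypes score near zero, while features collapsed onto a single prototype score high, which mirrors how a class prior can guard against the class-imbalance failure mode the abstract mentions.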
Related papers
- Source-Free Domain-Invariant Performance Prediction [68.39031800809553]
We propose a source-free approach centred on uncertainty-based estimation, using a generative model for calibration in the absence of source data.
Our experiments on benchmark object recognition datasets reveal that existing source-based methods fall short with limited source sample availability.
Our approach significantly outperforms the current state-of-the-art source-free and source-based methods, affirming its effectiveness in domain-invariant performance estimation.
arXiv Detail & Related papers (2024-08-05T03:18:58Z) - A Robust Negative Learning Approach to Partial Domain Adaptation Using
Source Prototypes [0.8895157045883034]
This work proposes a robust Partial Domain Adaptation (PDA) framework that mitigates the negative transfer problem.
It includes diverse, complementary label feedback, alleviating the effect of incorrect feedback and promoting pseudo-label refinement.
We conducted a series of comprehensive experiments, including an ablation analysis, covering a range of partial domain adaptation tasks.
arXiv Detail & Related papers (2023-09-07T07:26:27Z) - A principled approach to model validation in domain generalization [30.459247038765568]
We propose a novel model selection method suggesting that the validation process should account for both the classification risk and the domain discrepancy.
We validate the effectiveness of the proposed method by numerical results on several domain generalization datasets.
arXiv Detail & Related papers (2023-04-02T21:12:13Z) - Variational Model Perturbation for Source-Free Domain Adaptation [64.98560348412518]
We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework.
We demonstrate the theoretical connection to learning Bayesian neural networks, which proves the generalizability of the perturbed model to target domains.
arXiv Detail & Related papers (2022-10-19T08:41:19Z) - Feature Alignment by Uncertainty and Self-Training for Source-Free
Unsupervised Domain Adaptation [1.6498361958317636]
Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation.
We propose a source-free UDA method that uses only a pre-trained source model and unlabeled target images.
Our method captures the aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives.
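A consistency objective of the kind this summary describes can be sketched as follows. This is a generic illustration, not the paper's exact loss: it assumes two augmented views of the same target image are passed through the model, and penalizes disagreement between their class predictions via a symmetric KL divergence.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Symmetric KL between class predictions for two augmented views
    of the same target images; low when the predictions agree.

    logits_a, logits_b : (n, k) model outputs for the two views
    (hypothetical shapes; the paper's two objectives may differ).
    """
    p, q = softmax(logits_a), softmax(logits_b)
    kl_pq = np.sum(p * np.log(p / q), axis=-1)
    kl_qp = np.sum(q * np.log(q / p), axis=-1)
    return float(np.mean(kl_pq + kl_qp) / 2)
```

Training the feature generator to shrink this loss encourages predictions that are stable under the data augmentations used to probe aleatoric uncertainty.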
arXiv Detail & Related papers (2022-08-31T14:28:36Z) - Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose two effective gap estimation methods for guiding the selection of a better hypothesis for the target.
One of them minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z) - A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks for both synthetic-to-real and adverse conditions.
arXiv Detail & Related papers (2021-06-22T10:21:39Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.