Unsupervised Domain Adaptation Based on the Predictive Uncertainty of
Models
- URL: http://arxiv.org/abs/2211.08866v1
- Date: Wed, 16 Nov 2022 12:23:32 GMT
- Title: Unsupervised Domain Adaptation Based on the Predictive Uncertainty of
Models
- Authors: JoonHo Lee, Gyemin Lee
- Abstract summary: Unsupervised domain adaptation (UDA) aims to improve the prediction performance in the target domain under distribution shifts from the source domain.
We present a novel UDA method that learns domain-invariant features that minimize the domain divergence.
- Score: 1.6498361958317636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) aims to improve the prediction
performance in the target domain under distribution shifts from the source
domain. The key principle of UDA is to minimize the divergence between the
source and the target domains. To follow this principle, many methods employ a
domain discriminator to match the feature distributions. Some recent methods
evaluate the discrepancy between two predictions on target samples to detect
those that deviate from the source distribution. However, their performance is
limited because they either match the marginal distributions or measure the
divergence conservatively. In this paper, we present a novel UDA method that
learns domain-invariant features that minimize the domain divergence. We
propose model uncertainty as a measure of the domain divergence. Our UDA method
based on model uncertainty (MUDA) adopts a Bayesian framework and provides an
efficient way to evaluate model uncertainty by means of Monte Carlo dropout
sampling. Empirical results on image recognition tasks show that our method is
superior to existing state-of-the-art methods. We also extend MUDA to
multi-source domain adaptation problems.
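The abstract's key mechanism is estimating model uncertainty via Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and use the spread of the predictions as an uncertainty signal. A minimal numpy sketch of that idea, using a hypothetical one-layer softmax classifier (the weight matrix `W`, dropout rate, and sample count are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, n_samples=100, p_drop=0.5):
    """Monte Carlo dropout: average over stochastic forward passes
    with dropout kept on, and use predictive variance as uncertainty."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(W.shape) > p_drop            # random dropout mask
        logits = x @ (W * mask) / (1.0 - p_drop)       # inverted-dropout scaling
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax
        preds.append(probs)
    preds = np.stack(preds)
    mean = preds.mean(axis=0)               # predictive mean (class probabilities)
    uncertainty = preds.var(axis=0).sum()   # total predictive variance
    return mean, uncertainty

# Toy input and weights; target samples far from the source
# distribution would typically show higher predictive variance.
W = rng.normal(size=(4, 3))
x = rng.normal(size=4)
mean, unc = mc_dropout_predict(x, W)
```

In the paper's setting, this per-sample uncertainty serves as the divergence signal that training then minimizes on target data.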
Related papers
- Unsupervised Domain Adaptation via Domain-Adaptive Diffusion [31.802163238282343]
Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain.
Inspired by diffusion models which have strong capability to gradually convert data distributions across a large gap, we consider to explore the diffusion technique to handle the challenging UDA task.
Our method outperforms the current state-of-the-arts by a large margin on three widely used UDA datasets.
arXiv Detail & Related papers (2023-08-26T14:28:18Z)
- Distributionally Robust Domain Adaptation [12.02023514105999]
Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions.
In this paper, we propose DRDA, a distributionally robust domain adaptation method.
arXiv Detail & Related papers (2022-10-30T17:29:22Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
The other method is minimizing the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled, or sparsely labeled, but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Deep Least Squares Alignment for Unsupervised Domain Adaptation [6.942003070153651]
Unsupervised domain adaptation leverages rich information from a labeled source domain to model an unlabeled target domain.
We propose deep least squares alignment (DLSA) to estimate the distribution of the two domains in a latent space by parameterizing a linear model.
Extensive experiments demonstrate that the proposed DLSA model is effective in aligning domain distributions and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-11-03T13:23:06Z)
- Unsupervised BatchNorm Adaptation (UBNA): A Domain Adaptation Method for Semantic Segmentation Without Using Source Domain Representations [35.586031601299034]
Unsupervised BatchNorm Adaptation (UBNA) adapts a given pre-trained model to an unseen target domain.
We partially adapt the normalization layer statistics to the target domain using an exponentially decaying momentum factor.
Compared to standard UDA approaches, we report a trade-off between performance and the use of source domain representations.
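The UBNA summary describes updating normalization-layer statistics toward the target domain with an exponentially decaying momentum factor, so the statistics stay partly anchored to the source. A minimal numpy sketch of one plausible reading of that update rule (the initial momentum, decay rate, and feature shapes are illustrative assumptions, not the paper's reported settings):

```python
import numpy as np

def ubna_adapt(source_mean, source_var, target_batches, alpha0=0.1, decay=0.95):
    """Partially adapt BatchNorm-style running statistics to target batches,
    with a momentum factor that decays exponentially per batch."""
    mean, var = source_mean.copy(), source_var.copy()
    alpha = alpha0
    for batch in target_batches:
        mean = (1 - alpha) * mean + alpha * batch.mean(axis=0)
        var = (1 - alpha) * var + alpha * batch.var(axis=0)
        alpha *= decay  # momentum shrinks, gradually freezing the statistics
    return mean, var

# Source statistics near N(0, 1); target batches drawn near N(5, 1).
src_mean, src_var = np.zeros(2), np.ones(2)
rng = np.random.default_rng(1)
target_batches = [rng.normal(5.0, 1.0, size=(32, 2)) for _ in range(10)]
adapted_mean, adapted_var = ubna_adapt(src_mean, src_var, target_batches)
```

Because the momentum decays, the adapted mean lands between the source and target statistics rather than fully matching the target, which is the trade-off the entry describes.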
arXiv Detail & Related papers (2020-11-17T08:37:40Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Few-shot Domain Adaptation by Causal Mechanism Transfer [107.08605582020866]
We study few-shot supervised domain adaptation (DA) for regression problems, where only a few labeled target domain data and many labeled source domain data are available.
Many of the current DA methods base their transfer assumptions on either parametrized distribution shift or apparent distribution similarities.
We propose mechanism transfer, a meta-distributional scenario in which a data generating mechanism is invariant among domains.
arXiv Detail & Related papers (2020-02-10T02:16:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.