Uncertainty-Guided Alignment for Unsupervised Domain Adaptation in
Regression
- URL: http://arxiv.org/abs/2401.13721v2
- Date: Fri, 26 Jan 2024 10:59:54 GMT
- Title: Uncertainty-Guided Alignment for Unsupervised Domain Adaptation in
Regression
- Authors: Ismail Nejjar, Gaetan Frusque, Florent Forest, Olga Fink
- Abstract summary: Unsupervised Domain Adaptation for Regression aims to adapt a model from a labeled source domain to an unlabeled target domain for regression tasks.
Recent successful works in UDAR mostly focus on subspace alignment, involving the alignment of a selected subspace within the entire feature space.
We propose an effective method for UDAR by incorporating guidance from uncertainty.
- Score: 5.939858158928473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation for Regression (UDAR) aims to adapt a model
from a labeled source domain to an unlabeled target domain for regression
tasks. Recent successful works in UDAR mostly focus on subspace alignment,
involving the alignment of a selected subspace within the entire feature space.
This contrasts with feature alignment methods used for classification, which
aim to align the entire feature space and have proven effective there but less
so in regression settings. Specifically, while classification aims to
identify separate clusters across the entire embedding dimension, regression
induces less structure in the data representation, necessitating additional
guidance for efficient alignment. In this paper, we propose an effective method
for UDAR by incorporating guidance from uncertainty. Our approach serves a dual
purpose: providing a measure of confidence in predictions and acting as a
regularization of the embedding space. Specifically, we leverage the Deep
Evidential Learning framework, which outputs both predictions and uncertainties
for each input sample. We propose aligning the parameters of higher-order
evidential distributions between the source and target domains using
traditional alignment methods at the feature or posterior level. Additionally,
we propose to augment the feature space representation by mixing source samples
with pseudo-labeled target samples based on label similarity. This cross-domain
mixing strategy produces more realistic samples than random mixing and
introduces higher uncertainty, facilitating further alignment. We demonstrate
the effectiveness of our approach on four benchmarks for UDAR, on which we
outperform existing methods.
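The abstract describes two ingredients: an evidential head that outputs the parameters of a higher-order (Normal-Inverse-Gamma) distribution, whose statistics can be aligned across domains, and a cross-domain mixing step that pairs source samples with the target samples whose pseudo-labels are most similar. A minimal NumPy sketch of these two ideas on toy data follows; this is not the authors' code, and the function names, the softplus parameterization, and the simple moment-matching alignment loss are all illustrative assumptions.

```python
# Hedged sketch (not the paper's implementation): evidential regression head,
# its aleatoric/epistemic uncertainties, a toy posterior-level alignment loss,
# and similarity-based cross-domain mixing. All names are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def evidential_head(features, W):
    """Map features to Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta).
    Softplus keeps nu, beta > 0 and shifts alpha above 1, as in Deep
    Evidential Regression."""
    raw = features @ W                          # (n, 4)
    gamma = raw[:, 0]                           # predicted mean
    nu = np.log1p(np.exp(raw[:, 1]))            # softplus -> nu > 0
    alpha = np.log1p(np.exp(raw[:, 2])) + 1.0   # alpha > 1
    beta = np.log1p(np.exp(raw[:, 3]))          # beta > 0
    return gamma, nu, alpha, beta

def uncertainties(nu, alpha, beta):
    """Closed-form uncertainties of the NIG posterior."""
    aleatoric = beta / (alpha - 1.0)            # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))     # model uncertainty
    return aleatoric, epistemic

def moment_alignment_loss(src_params, tgt_params):
    """Toy stand-in for aligning evidential parameters across domains:
    match the mean of each NIG parameter between source and target."""
    return sum((s.mean() - t.mean()) ** 2 for s, t in zip(src_params, tgt_params))

def similarity_mix(x_src, y_src, x_tgt, y_pseudo, lam=0.5):
    """Mix each source sample with the target sample whose pseudo-label is
    closest (cross-domain mixing by label similarity, not random pairing)."""
    idx = np.abs(y_src[:, None] - y_pseudo[None, :]).argmin(axis=1)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt[idx]
    y_mix = lam * y_src + (1.0 - lam) * y_pseudo[idx]
    return x_mix, y_mix
```

In a training loop, the alignment loss would be added to the source regression loss, and the mixed samples would be fed back through the evidential head; the paper notes that such mixed samples carry higher uncertainty, which further drives alignment.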
Related papers
- DRIVE: Dual-Robustness via Information Variability and Entropic Consistency in Source-Free Unsupervised Domain Adaptation [10.127634263641877]
Adapting machine learning models to new domains without labeled data is a critical challenge in applications like medical imaging, autonomous driving, and remote sensing.
This task, known as Source-Free Unsupervised Domain Adaptation (SFUDA), involves adapting a pre-trained model to a target domain using only unlabeled target data.
Existing SFUDA methods often rely on single-model architectures, struggling with uncertainty and variability in the target domain.
We propose DRIVE, a novel SFUDA framework leveraging a dual-model architecture. The two models, with identical weights, work in parallel to capture diverse target domain characteristics.
(arXiv, 2024-11-24)
- PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation [87.69789891809562]
Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains.
The calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention.
We propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data.
(arXiv, 2023-07-14)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
(arXiv, 2022-12-21)
- Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation [1.6498361958317636]
Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation.
We propose a source-free UDA method that uses only a pre-trained source model and unlabeled target images.
Our method captures the aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives.
(arXiv, 2022-08-31)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
(arXiv, 2022-06-02)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
(arXiv, 2022-02-14)
- Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification [54.174146346387204]
We propose an approach named probabilistic uncertainty guided progressive label refinery (P$^2$LR) for domain adaptive person re-identification.
A quantitative criterion is established to measure the uncertainty of pseudo labels and facilitate the network training.
Our method outperforms the baseline by 6.5% mAP on the Duke2Market task, while surpassing the state-of-the-art method by 2.5% mAP on the Market2MSMT task.
(arXiv, 2021-12-28)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
(arXiv, 2021-12-03)
- Certainty Volume Prediction for Unsupervised Domain Adaptation [35.984559137218504]
Unsupervised domain adaptation (UDA) deals with the problem of classifying unlabeled target domain data.
We propose a novel uncertainty-aware domain adaptation setup that models uncertainty as a multivariate Gaussian distribution in feature space.
We evaluate our proposed pipeline on challenging UDA datasets and achieve state-of-the-art results.
(arXiv, 2021-11-03)
- Unsupervised Domain Adaptation by Uncertain Feature Alignment [29.402619219254074]
Unsupervised domain adaptation (UDA) deals with the adaptation of models from a given source domain with labeled data to an unlabeled target domain.
In this paper, we utilize the inherent prediction uncertainty of a model to accomplish the domain adaptation task.
(arXiv, 2020-09-14)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
(arXiv, 2020-06-26)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.