Uncertainty-Guided Alignment for Unsupervised Domain Adaptation in Regression
- URL: http://arxiv.org/abs/2401.13721v3
- Date: Thu, 21 Nov 2024 15:14:29 GMT
- Title: Uncertainty-Guided Alignment for Unsupervised Domain Adaptation in Regression
- Authors: Ismail Nejjar, Gaetan Frusque, Florent Forest, Olga Fink
- Abstract summary: Unsupervised Domain Adaptation for Regression (UDAR) aims to adapt models from a labeled source domain to an unlabeled target domain for regression tasks.
Traditional feature alignment methods, successful in classification, often prove ineffective for regression due to the correlated nature of regression features.
We propose Uncertainty-Guided Alignment (UGA), a novel method that integrates predictive uncertainty into the feature alignment process.
- Score: 5.437298646956505
- License:
- Abstract: Unsupervised Domain Adaptation for Regression (UDAR) aims to adapt models from a labeled source domain to an unlabeled target domain for regression tasks. Traditional feature alignment methods, successful in classification, often prove ineffective for regression due to the correlated nature of regression features. To address this challenge, we propose Uncertainty-Guided Alignment (UGA), a novel method that integrates predictive uncertainty into the feature alignment process. UGA employs Evidential Deep Learning to predict both target values and their associated uncertainties. This uncertainty information guides the alignment process and fuses information within the embedding space, effectively mitigating issues such as feature collapse in out-of-distribution scenarios. We evaluate UGA on two computer vision benchmarks and a real-world battery state-of-charge prediction across different manufacturers and operating temperatures. Across 52 transfer tasks, UGA on average outperforms existing state-of-the-art methods. Our approach not only improves adaptation performance but also provides well-calibrated uncertainty estimates.
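The abstract states that UGA uses Evidential Deep Learning to predict both target values and their uncertainties. As a hedged illustration of what such an evidential regression head exposes (following Deep Evidential Regression, Amini et al., 2020, not the paper's own code), the Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta) yield a point prediction plus separate aleatoric and epistemic uncertainty estimates:

```python
def nig_uncertainties(gamma, nu, alpha, beta):
    """Point prediction and uncertainties from Normal-Inverse-Gamma
    parameters output by an evidential regression head.
    Assumes alpha > 1 and nu > 0 (enforced in practice via
    softplus/offset activations on the network outputs)."""
    prediction = gamma                        # E[mu]
    aleatoric = beta / (alpha - 1.0)          # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))   # Var[mu]: model uncertainty
    return prediction, aleatoric, epistemic
```

More accumulated evidence (larger nu) shrinks the epistemic term while leaving the aleatoric term unchanged, which is the property that lets an uncertainty-guided method separate noisy in-distribution samples from out-of-distribution ones.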
Related papers
- DRIVE: Dual-Robustness via Information Variability and Entropic Consistency in Source-Free Unsupervised Domain Adaptation [10.127634263641877]
Adapting machine learning models to new domains without labeled data is a critical challenge in applications like medical imaging, autonomous driving, and remote sensing.
This task, known as Source-Free Unsupervised Domain Adaptation (SFUDA), involves adapting a pre-trained model to a target domain using only unlabeled target data.
Existing SFUDA methods often rely on single-model architectures, struggling with uncertainty and variability in the target domain.
We propose DRIVE, a novel SFUDA framework leveraging a dual-model architecture. The two models, with identical weights, work in parallel to capture diverse target domain characteristics.
arXiv Detail & Related papers (2024-11-24T20:35:04Z)
- PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation [87.69789891809562]
Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains.
The calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention.
We propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data.
arXiv Detail & Related papers (2023-07-14T17:21:41Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation [1.6498361958317636]
Most unsupervised domain adaptation (UDA) methods assume that labeled source images are available during model adaptation.
We propose a source-free UDA method that uses only a pre-trained source model and unlabeled target images.
Our method captures the aleatoric uncertainty by incorporating data augmentation and trains the feature generator with two consistency objectives.
arXiv Detail & Related papers (2022-08-31T14:28:36Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification [54.174146346387204]
We propose an approach named probabilistic uncertainty guided progressive label refinery (P$^2$LR) for domain adaptive person re-identification.
A quantitative criterion is established to measure the uncertainty of pseudo labels and facilitate the network training.
Our method outperforms the baseline by 6.5% mAP on the Duke2Market task, while surpassing the state-of-the-art method by 2.5% mAP on the Market2MSMT task.
arXiv Detail & Related papers (2021-12-28T07:40:12Z)
- Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using soft pseudo-label strategy.
At the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
arXiv Detail & Related papers (2021-12-03T14:47:32Z)
- Certainty Volume Prediction for Unsupervised Domain Adaptation [35.984559137218504]
Unsupervised domain adaptation (UDA) deals with the problem of classifying unlabeled target domain data.
We propose a novel uncertainty-aware domain adaptation setup that models uncertainty as a multivariate Gaussian distribution in feature space.
We evaluate our proposed pipeline on challenging UDA datasets and achieve state-of-the-art results.
arXiv Detail & Related papers (2021-11-03T11:22:55Z)
- Unsupervised Domain Adaptation by Uncertain Feature Alignment [29.402619219254074]
Unsupervised domain adaptation (UDA) deals with the adaptation of models from a given source domain with labeled data to an unlabeled target domain.
In this paper, we utilize the inherent prediction uncertainty of a model to accomplish the domain adaptation task.
arXiv Detail & Related papers (2020-09-14T14:42:41Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
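Several of the entries above share one idea: weight the alignment or training signal by predictive uncertainty so that unreliable target samples contribute less. As a purely illustrative sketch of this common thread (a linear-kernel mean discrepancy with hypothetical confidence weights, not any listed paper's exact method):

```python
import numpy as np

def uncertainty_weighted_discrepancy(src_feats, tgt_feats, tgt_uncertainty):
    """Squared distance between the source feature mean and an
    uncertainty-weighted target feature mean. Target samples with
    high predictive uncertainty receive low weight, so they pull
    the alignment objective less."""
    w = 1.0 / (1.0 + tgt_uncertainty)               # confidence weights
    w = w / w.sum()                                 # normalise to sum to 1
    src_mean = src_feats.mean(axis=0)
    tgt_mean = (w[:, None] * tgt_feats).sum(axis=0)  # weighted target mean
    diff = src_mean - tgt_mean
    return float(diff @ diff)
```

With uniform uncertainties this reduces to the ordinary mean-matching discrepancy; as a target sample's uncertainty grows, its influence on the aligned mean vanishes.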
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.