Mitigating Both Covariate and Conditional Shift for Domain
Generalization
- URL: http://arxiv.org/abs/2209.08253v1
- Date: Sat, 17 Sep 2022 05:13:56 GMT
- Title: Mitigating Both Covariate and Conditional Shift for Domain
Generalization
- Authors: Jianxin Lin, Yongqiang Tang, Junping Wang and Wensheng Zhang
- Abstract summary: Domain generalization (DG) aims to learn a model on several source domains, hoping that the model can generalize well to unseen target domains.
In this paper, a novel DG method is proposed to deal with the distribution shift via Visual Alignment and Uncertainty-guided belief Ensemble (VAUE)
- Score: 14.91361835243516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) aims to learn a model on several source domains,
hoping that the model can generalize well to unseen target domains. The
distribution shift between domains comprises both covariate shift and conditional
shift, and a model must be able to handle both for better generalizability. In
this paper, a novel DG method is proposed to deal with the
distribution shift via Visual Alignment and Uncertainty-guided belief Ensemble
(VAUE). Specifically, for the covariate shift, a visual alignment module is
designed to align the distribution of image style to a common empirical
Gaussian distribution so that the covariate shift can be eliminated in the
visual space. For the conditional shift, we adopt an uncertainty-guided belief
ensemble strategy based on subjective logic and Dempster-Shafer theory. The
conditional distribution of a test sample is estimated by dynamically combining
those of the source domains. Comprehensive experiments are conducted
to demonstrate the superior performance of the proposed method on four widely
used datasets, i.e., Office-Home, VLCS, TerraIncognita, and PACS.
Related papers
- A Generalized Label Shift Perspective for Cross-Domain Gaze Estimation [27.20591585252664]
Cross-domain Gaze Estimation (CDGE) is developed for real-world application scenarios.
We introduce a novel Generalized Label Shift (GLS) perspective to CDGE.
To embed the reweighted source distribution into conditional invariant learning, we derive a probability-aware estimation of the conditional operator discrepancy.
arXiv Detail & Related papers (2025-05-19T12:33:52Z)
- Guidance Not Obstruction: A Conjugate Consistent Enhanced Strategy for Domain Generalization [50.04665252665413]
We argue that acquiring discriminative generalization between classes within domains is crucial.
In contrast to seeking distribution alignment, we endeavor to safeguard domain-related between-class discrimination.
We employ a novel distribution-level Universum strategy to generate supplementary diverse domain-related class-conditional distributions.
arXiv Detail & Related papers (2024-12-13T12:25:16Z)
- Generalize or Detect? Towards Robust Semantic Segmentation Under Multiple Distribution Shifts [56.57141696245328]
In open-world scenarios, where both novel classes and domains may exist, an ideal segmentation model should detect anomaly classes for safety.
Existing methods often struggle to distinguish between domain-level and semantic-level distribution shifts.
arXiv Detail & Related papers (2024-11-06T11:03:02Z)
- Proxy Methods for Domain Adaptation [78.03254010884783]
proxy variables allow for adaptation to distribution shift without explicitly recovering or modeling latent variables.
We develop a two-stage kernel estimation approach to adapt to complex distribution shifts in both settings.
arXiv Detail & Related papers (2024-03-12T09:32:41Z)
- Invariant Anomaly Detection under Distribution Shifts: A Causal Perspective [6.845698872290768]
Anomaly detection (AD) is the machine learning task of identifying highly discrepant abnormal samples.
Under the constraints of a distribution shift, the assumption that training samples and test samples are drawn from the same distribution breaks down.
We attempt to increase the resilience of anomaly detection models to different kinds of distribution shifts.
arXiv Detail & Related papers (2023-12-21T23:20:47Z)
- Cross Contrasting Feature Perturbation for Domain Generalization [11.863319505696184]
Domain generalization aims to learn a robust model from source domains that generalize well on unseen target domains.
Recent studies focus on generating novel domain samples or features to diversify distributions complementary to source domains.
We propose an online one-stage Cross Contrasting Feature Perturbation framework to simulate domain shift.
arXiv Detail & Related papers (2023-07-24T03:27:41Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains (a rough sketch of this objective appears after this list).
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
The other method is minimizing the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG)
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution (see the toy sketch after this list).
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA)
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)