Robust Domain Generalization under Divergent Marginal and Conditional Distributions
- URL: http://arxiv.org/abs/2602.02015v1
- Date: Mon, 02 Feb 2026 12:13:41 GMT
- Title: Robust Domain Generalization under Divergent Marginal and Conditional Distributions
- Authors: Jewon Yeom, Kyubyung Chae, Hyunggyu Lim, Yoonna Oh, Dongyoon Yang, Taesup Kim
- Abstract summary: Domain generalization aims to learn predictive models that can generalize to unseen domains. We propose a unified framework for robust domain generalization under divergent marginal and conditional distributions. We derive a novel risk bound for unseen domains by explicitly decomposing the joint distribution into marginal and conditional components.
- Score: 10.703095121858503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) aims to learn predictive models that can generalize to unseen domains. Most existing DG approaches focus on learning domain-invariant representations under the assumption of conditional distribution shift (i.e., primarily addressing changes in $P(X\mid Y)$ while assuming $P(Y)$ remains stable). However, real-world scenarios with multiple domains often involve compound distribution shifts where both the marginal label distribution $P(Y)$ and the conditional distribution $P(X\mid Y)$ vary simultaneously. To address this, we propose a unified framework for robust domain generalization under divergent marginal and conditional distributions. We derive a novel risk bound for unseen domains by explicitly decomposing the joint distribution into marginal and conditional components and characterizing risk gaps arising from both sources of divergence. To operationalize this bound, we design a meta-learning procedure that minimizes and validates the proposed risk bound across seen domains, ensuring strong generalization to unseen ones. Empirical evaluations demonstrate that our method achieves state-of-the-art performance not only on conventional DG benchmarks but also in challenging multi-domain long-tailed recognition settings where both marginal and conditional shifts are pronounced.
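The decomposition at the heart of the abstract, $P(X, Y) = P(Y)\,P(X \mid Y)$, suggests measuring the two sources of divergence separately. A minimal illustrative sketch, not the paper's actual bound: the total-variation proxy for marginal label shift and the centroid-based proxy for class-conditional shift below are assumptions chosen for simplicity.

```python
import numpy as np

def label_marginal_divergence(y_src, y_tgt, num_classes):
    """Total-variation distance between empirical label marginals P(Y)."""
    p = np.bincount(y_src, minlength=num_classes) / len(y_src)
    q = np.bincount(y_tgt, minlength=num_classes) / len(y_tgt)
    return 0.5 * float(np.abs(p - q).sum())

def conditional_divergence(x_src, y_src, x_tgt, y_tgt, num_classes):
    """Crude proxy for divergence in P(X | Y): distance between per-class
    feature centroids, averaged over classes present in both domains."""
    dists = []
    for c in range(num_classes):
        src_c, tgt_c = x_src[y_src == c], x_tgt[y_tgt == c]
        if len(src_c) and len(tgt_c):
            dists.append(np.linalg.norm(src_c.mean(0) - tgt_c.mean(0)))
    return float(np.mean(dists)) if dists else 0.0
```

Under a compound shift, both quantities are nonzero at once, which is exactly the regime the paper targets.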
Related papers
- Guidance Not Obstruction: A Conjugate Consistent Enhanced Strategy for Domain Generalization [50.04665252665413]
We argue that acquiring discriminative generalization between classes within domains is crucial. In contrast to seeking distribution alignment, we endeavor to safeguard domain-related between-class discrimination. We employ a novel distribution-level Universum strategy to generate supplementary diverse domain-related class-conditional distributions.
arXiv Detail & Related papers (2024-12-13T12:25:16Z)
- Domain Generalisation via Risk Distribution Matching [17.334794920092577]
We propose a novel approach for domain generalisation (DG) leveraging risk distributions to characterise domains.
At test time, we may observe similar divergences between risk distributions, potentially intensified in magnitude.
We show that Risk Distribution Matching (RDM) shows superior generalisation capability over state-of-the-art DG methods.
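The risk-distribution idea above can be sketched minimally: treat each domain's per-sample losses as an empirical distribution and penalize disagreement between their summary statistics. The moment-matching proxy below is an illustrative assumption, not RDM's exact objective.

```python
import numpy as np

def risk_moment_gap(losses_by_domain):
    """Crude proxy for mismatch between per-domain risk distributions:
    the across-domain variance of each distribution's mean and variance.
    Zero exactly when all domains share the same first two loss moments."""
    means = np.array([l.mean() for l in losses_by_domain])
    vars_ = np.array([l.var() for l in losses_by_domain])
    return float(means.var() + vars_.var())
```

Adding such a gap as a regularizer encourages the model to incur similarly distributed risk on every seen domain, the property RDM wants to carry over to unseen ones.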
arXiv Detail & Related papers (2023-10-28T05:23:55Z)
- Moderately Distributional Exploration for Domain Generalization [32.57429594854056]
We show that MODE can endow models with provable generalization performance on unknown target domains.
Experimental results show that MODE achieves competitive performance compared to state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-27T06:50:15Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
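The cross-domain posterior-matching objective summarized above can be illustrated with a small sketch: compute the mean KL divergence between class posteriors that two domain-specific heads produce on the same inputs. The logit-based parameterization here is an assumption for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def posterior_kl(logits_a, logits_b, eps=1e-12):
    """Mean KL(P_a(Y|X) || P_b(Y|X)) over paired samples: the quantity
    a cross-domain-likelihood method would drive toward zero."""
    p, q = softmax(logits_a), softmax(logits_b)
    return float((p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1).mean())
```

Minimizing this quantity across all domain pairs aligns the posteriors $P(Y \mid X)$ rather than the marginals $P(X)$, which is the distinction the paper draws.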
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
The other method is minimizing the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Probable Domain Generalization via Quantile Risk Minimization [90.15831047587302]
Domain generalization seeks predictors which perform well on unseen test distributions.
We propose a new probabilistic framework for DG where the goal is to learn predictors that perform well with high probability.
arXiv Detail & Related papers (2022-07-20T14:41:09Z)
- Instrumental Variable-Driven Domain Generalization with Unobserved Confounders [53.735614014067394]
Domain generalization (DG) aims to learn from multiple source domains a model that can generalize well on unseen target domains.
We propose an instrumental variable-driven DG method (IV-DG) by removing the bias of the unobserved confounders with two-stage learning.
In the first stage, it learns the conditional distribution of the input features of one domain given input features of another domain.
In the second stage, it estimates the relationship by predicting labels with the learned conditional distribution.
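The two-stage procedure described above can be sketched in a deliberately simplified linear setting; least-squares in both stages is an assumption standing in for IV-DG's learned conditional model, not the method itself.

```python
import numpy as np

def two_stage_fit(x_b, x_a, y):
    """Stage 1: estimate E[X_a | X_b] by regressing domain-A features on
    domain-B features. Stage 2: predict labels from the fitted features,
    discarding variation in X_a that X_b cannot explain."""
    w1, *_ = np.linalg.lstsq(x_b, x_a, rcond=None)    # stage 1
    x_a_hat = x_b @ w1                                # fitted conditional mean
    w2, *_ = np.linalg.lstsq(x_a_hat, y, rcond=None)  # stage 2
    return w1, w2

def two_stage_predict(x_b, w1, w2):
    """Label prediction through the fitted conditional features."""
    return x_b @ w1 @ w2
```

In the linear case this mirrors classical two-stage least squares, where the first-stage fit filters out the confounded component before the label relationship is estimated.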
arXiv Detail & Related papers (2021-10-04T13:32:57Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training compared to state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing based on domain generalization (DG) has drawn growing attention due to its robustness in unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM), which does not require domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.