Joint covariate-alignment and concept-alignment: a framework for domain
generalization
- URL: http://arxiv.org/abs/2208.00898v1
- Date: Mon, 1 Aug 2022 14:39:35 GMT
- Title: Joint covariate-alignment and concept-alignment: a framework for domain
generalization
- Authors: Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, and
Shuchin Aeron
- Abstract summary: We propose a novel domain generalization framework based on a new upper bound to the risk on the unseen domain.
Our numerical results show that the proposed methods perform as well as or better than the state-of-the-art for domain generalization on several data sets.
- Score: 28.391072289529053
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we propose a novel domain generalization (DG) framework based
on a new upper bound to the risk on the unseen domain. Particularly, our
framework proposes to jointly minimize both the covariate-shift as well as the
concept-shift between the seen domains for a better performance on the unseen
domain. While the proposed approach can be implemented via an arbitrary
combination of covariate-alignment and concept-alignment modules, in this work
we use well-established approaches for distributional alignment, namely Maximum
Mean Discrepancy (MMD) and correlation alignment (CORAL), and use an Invariant
Risk Minimization (IRM)-based approach for concept alignment. Our numerical
results show that the proposed methods perform as well as or better than the
state-of-the-art for domain generalization on several data sets.
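As a rough illustration of how such a joint objective could be assembled, the sketch below combines a per-domain empirical risk with pairwise covariate-alignment penalties (linear-kernel MMD and CORAL) and a per-domain IRMv1-style concept-alignment penalty. This is a minimal NumPy sketch under simplifying assumptions (squared loss, a fixed linear head, a scalar dummy classifier for the IRM penalty, and hypothetical weights `lam_cov` and `lam_concept`), not the authors' implementation:

```python
import numpy as np
from itertools import combinations

def mmd_linear(X, Y):
    # Linear-kernel MMD^2: squared distance between feature means.
    return float(np.sum((X.mean(axis=0) - Y.mean(axis=0)) ** 2))

def coral(X, Y):
    # CORAL loss: squared Frobenius distance between feature covariances.
    d = X.shape[1]
    cx, cy = np.cov(X, rowvar=False), np.cov(Y, rowvar=False)
    return float(np.sum((cx - cy) ** 2)) / (4.0 * d * d)

def irm_penalty(preds, y):
    # IRMv1-style penalty with squared loss: squared gradient of the
    # per-domain risk w.r.t. a scalar dummy classifier, evaluated at w = 1.
    grad = np.mean(2.0 * (preds - y) * preds)
    return float(grad ** 2)

def joint_objective(domains, head, lam_cov=1.0, lam_concept=1.0):
    """Empirical risk + covariate alignment (MMD + CORAL over domain pairs)
    + concept alignment (IRM penalty per domain).
    `domains` is a list of (features, labels); `head` is a linear classifier."""
    risk = sum(np.mean((f @ head - y) ** 2) for f, y in domains)
    cov = sum(mmd_linear(fi, fj) + coral(fi, fj)
              for (fi, _), (fj, _) in combinations(domains, 2))
    concept = sum(irm_penalty(f @ head, y) for f, y in domains)
    return risk + lam_cov * cov + lam_concept * concept
```

In practice the features would come from a learned representation and all three terms would be minimized jointly by gradient descent; the sketch only shows how the covariate-shift and concept-shift penalties enter one scalar objective.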
Related papers
- Robust Domain Generalization under Divergent Marginal and Conditional Distributions [10.703095121858503]
Domain generalization aims to learn predictive models that can generalize to unseen domains.
We propose a unified framework for robust domain generalization under divergent marginal and conditional distributions.
We derive a novel risk bound for unseen domains by explicitly decomposing the joint distribution into marginal and conditional components.
arXiv Detail & Related papers (2026-02-02T12:13:41Z)
- Synergy over Discrepancy: A Partition-Based Approach to Multi-Domain LLM Fine-Tuning [9.97195966127976]
Large language models (LLMs) demonstrate impressive generalization abilities, yet adapting them effectively across multiple heterogeneous domains remains challenging.
We propose a partition-based multi-stage fine-tuning framework designed to exploit inter-domain synergies while minimizing negative transfer.
Our approach strategically partitions domains into subsets (stages) by balancing domain discrepancy, synergy, and model capacity constraints.
arXiv Detail & Related papers (2025-11-10T15:27:26Z)
- Group-wise Scaling and Orthogonal Decomposition for Domain-Invariant Feature Extraction in Face Anti-Spoofing [7.902884193437407]
We propose a novel DGFAS framework that jointly aligns weights and biases through Feature Orthogonal Decomposition (FOD) and Group-wise Scaling Risk Minimization (GS-RM).
Our approach achieves state-of-the-art performance, consistently improving accuracy, reducing bias misalignment, and enhancing stability on unseen target domains.
arXiv Detail & Related papers (2025-07-05T11:20:19Z)
- Moment Alignment: Unifying Gradient and Hessian Matching for Domain Generalization [13.021311628351423]
Domain generalization (DG) seeks to develop models that generalize well to unseen target domains.
One line of research in DG focuses on aligning domain-level gradients and Hessians to enhance generalization.
We introduce Closed-Form Moment Alignment (CMA), a novel DG algorithm that aligns domain-level gradients and Hessians in closed form.
arXiv Detail & Related papers (2025-06-09T02:51:36Z)
- From Deterministic to Probabilistic: A Novel Perspective on Domain Generalization for Medical Image Segmentation [1.93061220186624]
We propose an innovative framework that enhances data representation quality through probabilistic modeling and contrastive learning.
Specifically, we combine deterministic features with uncertainty modeling to capture comprehensive feature distributions.
We show that the proposed framework significantly improves segmentation performance, providing a robust solution to domain generalization challenges in medical image segmentation.
arXiv Detail & Related papers (2024-12-07T07:41:04Z)
- Domain Generalisation via Risk Distribution Matching [17.334794920092577]
We propose a novel approach for domain generalisation (DG) leveraging risk distributions to characterise domains.
At test time, similar divergences between risk distributions may be observed, potentially intensified in magnitude.
We show that Risk Distribution Matching (RDM) shows superior generalisation capability over state-of-the-art DG methods.
arXiv Detail & Related papers (2023-10-28T05:23:55Z)
- Conditional Support Alignment for Domain Adaptation with Label Shift [8.819673391477034]
Unsupervised domain adaptation (UDA) refers to a domain adaptation framework in which a learning model is trained on labeled samples from the source domain and unlabeled samples from the target domain.
We propose a novel conditional adversarial support alignment (CASA) method that aims to minimize the conditional symmetric support divergence between the feature representation distributions of the source and target domains.
arXiv Detail & Related papers (2023-05-29T05:20:18Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness in unseen scenarios.
To overcome this limitation, we propose domain dynamic adjustment meta-learning (D2AM), which does not use domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques to adapt semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or the estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation leverages well-established source domain information to learn on the unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.