Barycentric-alignment and reconstruction loss minimization for domain
generalization
- URL: http://arxiv.org/abs/2109.01902v6
- Date: Sun, 21 May 2023 21:44:51 GMT
- Title: Barycentric-alignment and reconstruction loss minimization for domain
generalization
- Authors: Boyang Lyu, Thuan Nguyen, Prakash Ishwar, Matthias Scheutz, Shuchin
Aeron
- Abstract summary: This paper advances the theory and practice of Domain Generalization (DG) in machine learning.
We propose a novel DG algorithm named Wasserstein Barycenter Auto-Encoder (WBAE) that simultaneously minimizes the classification loss, the barycenter loss, and the reconstruction loss.
Numerical results demonstrate that the proposed method outperforms current state-of-the-art DG algorithms on several datasets.
- Score: 30.459247038765568
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper advances the theory and practice of Domain Generalization (DG) in
machine learning. We consider the typical DG setting where the hypothesis is
composed of a representation mapping followed by a labeling function. Within
this setting, the majority of popular DG methods aim to jointly learn the
representation and the labeling functions by minimizing a well-known upper
bound for the classification risk in the unseen domain. In practice, however,
methods based on this theoretical upper bound ignore a term that cannot be
directly optimized due to its dual dependence on both the representation
mapping and the unknown optimal labeling function in the unseen domain. To
bridge this gap between theory and practice, we introduce a new upper bound
that is free of terms having such dual dependence, resulting in a fully
optimizable risk upper bound for the unseen domain. Our derivation leverages
classical and recent transport inequalities that link optimal transport metrics
with information-theoretic measures. Compared to previous bounds, our bound
introduces two new terms: (i) the Wasserstein-2 barycenter term that aligns
distributions between domains, and (ii) the reconstruction loss term that
assesses the quality of representation in reconstructing the original data.
Based on this new upper bound, we propose a novel DG algorithm named
Wasserstein Barycenter Auto-Encoder (WBAE) that simultaneously minimizes the
classification loss, the barycenter loss, and the reconstruction loss.
Numerical results demonstrate that the proposed method outperforms current
state-of-the-art DG algorithms on several datasets.
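The three-term WBAE objective described in the abstract (classification loss + barycenter loss + reconstruction loss) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the barycenter term is replaced here by a crude proxy that aligns domain feature means, whereas WBAE uses the Wasserstein-2 barycenter of the feature distributions, and all function and variable names are hypothetical.

```python
import numpy as np

def wbae_objective(features_by_domain, recon_by_domain, inputs_by_domain,
                   probs_by_domain, labels_by_domain,
                   lam_bar=1.0, lam_rec=1.0):
    """Sketch of a WBAE-style objective: classification loss
    + barycenter-alignment loss + reconstruction loss.

    The barycenter term here is a crude proxy (squared distance of each
    domain's feature mean to the cross-domain mean); the actual method
    uses the Wasserstein-2 barycenter of the feature distributions.
    """
    # Classification: average cross-entropy over all source domains.
    cls = np.mean([-np.log(p[np.arange(len(y)), y] + 1e-12).mean()
                   for p, y in zip(probs_by_domain, labels_by_domain)])
    # Reconstruction: mean squared error between inputs and decoder outputs.
    rec = np.mean([((x - r) ** 2).mean()
                   for x, r in zip(inputs_by_domain, recon_by_domain)])
    # Barycenter proxy: pull each domain's feature mean toward the
    # cross-domain mean (a Gaussian-mean stand-in for W2 alignment).
    means = np.stack([z.mean(axis=0) for z in features_by_domain])
    bar = ((means - means.mean(axis=0)) ** 2).sum(axis=1).mean()
    return cls + lam_bar * bar + lam_rec * rec
```

In the sketch, each argument is a list with one array per source domain; `lam_bar` and `lam_rec` weight the alignment and reconstruction terms against the classification loss, mirroring the trade-off the paper's bound suggests.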
Related papers
- C-DGPA: Class-Centric Dual-Alignment Generative Prompt Adaptation [8.824565305964406]
Unsupervised Domain Adaptation transfers knowledge from a labeled source domain to an unlabeled target domain.
Existing prompt-tuning strategies primarily align marginal distribution discrepancies.
C-DGPA integrates domain knowledge into prompt learning via synergistic optimization.
It achieves new state-of-the-art results on all benchmarks.
arXiv Detail & Related papers (2025-12-18T04:30:53Z) - Moment Alignment: Unifying Gradient and Hessian Matching for Domain Generalization [13.021311628351423]
Domain generalization (DG) seeks to develop models that generalize well to unseen target domains.
One line of research in DG focuses on aligning domain-level gradients and Hessians to enhance generalization.
We introduce Closed-Form Moment Alignment (CMA), a novel DG algorithm that aligns domain-level gradients and Hessians in closed form.
arXiv Detail & Related papers (2025-06-09T02:51:36Z) - Domain Adaptation via Rebalanced Sub-domain Alignment [22.68115322836635]
Unsupervised domain adaptation (UDA) is a technique used to transfer knowledge from a labeled source domain to a related unlabeled target domain.
Many UDA methods have shown success in the past, but they often assume that the source and target domains have identical class label distributions.
We propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
arXiv Detail & Related papers (2023-02-03T21:30:40Z) - Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal
Transport [0.0]
Domain adaptation arises as an important problem in statistical learning theory.
Recent advances show that the success of domain adaptation algorithms heavily relies on their ability to minimize the divergence between the probability distributions of the source and target domains.
We propose a new theoretical framework for domain adaptation through hierarchical optimal transport.
arXiv Detail & Related papers (2022-10-24T15:34:09Z) - Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
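The posterior-alignment idea summarized above (minimizing KL-divergence between posterior distributions from different domains) can be illustrated with a simple proxy; this is a hypothetical sketch, not that paper's method, and the function name is an assumption.

```python
import numpy as np

def mean_kl_between_domains(post_a, post_b, eps=1e-12):
    """Average KL divergence between per-sample class-posterior
    distributions from two domains: a simple proxy for the
    cross-domain posterior-alignment objective described above.

    post_a, post_b: (n, C) arrays of class probabilities.
    """
    p = np.clip(post_a, eps, 1.0)
    q = np.clip(post_b, eps, 1.0)
    # KL(p || q) per sample, then averaged over the batch.
    return np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1))
```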
arXiv Detail & Related papers (2022-10-09T03:41:02Z) - Relation Matters: Foreground-aware Graph-based Relational Reasoning for
Domain Adaptive Object Detection [81.07378219410182]
We propose a new and general framework for domain adaptive object detection, named Foreground-aware Graph-based Relational Reasoning (FGRR).
FGRR incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations.
Empirical results demonstrate that the proposed FGRR exceeds the state-of-the-art on four domain adaptive object detection benchmarks.
arXiv Detail & Related papers (2022-06-06T05:12:48Z) - Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing [53.82826073959756]
Face anti-spoofing based on domain generalization (DG) has drawn growing attention due to its robustness in unseen scenarios.
To overcome the limitation, we propose domain dynamic adjustment meta-learning (D2AM), which does not require domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Discrepancy Minimization in Domain Generalization with Generative
Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches attempt to solve domain generalization by learning domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z) - Maximum Density Divergence for Domain Adaptation [0.0]
Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain.
We propose a new domain adaptation method named Adversarial Tight Match (ATM) which enjoys the benefits of both adversarial training and metric learning.
arXiv Detail & Related papers (2020-04-27T07:35:06Z) - Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable
Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates learning on the unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.