Trade-off between reconstruction loss and feature alignment for domain
generalization
- URL: http://arxiv.org/abs/2210.15000v1
- Date: Wed, 26 Oct 2022 19:40:25 GMT
- Title: Trade-off between reconstruction loss and feature alignment for domain
generalization
- Authors: Thuan Nguyen, Boyang Lyu, Prakash Ishwar, Matthias Scheutz, Shuchin
Aeron
- Abstract summary: Domain generalization (DG) is a branch of transfer learning that aims to train the learning models on several seen domains and subsequently apply these pre-trained models to other unseen (unknown but related) domains.
To deal with the challenging setting in DG where neither the data nor the labels of the unseen domain are available at training time, the most common approach is to design classifiers based on domain-invariant representation features.
Contrary to popular belief, we show that designing classifiers based on invariant representation features alone is necessary but insufficient in DG.
- Score: 30.459247038765568
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Domain generalization (DG) is a branch of transfer learning that aims to
train the learning models on several seen domains and subsequently apply these
pre-trained models to other unseen (unknown but related) domains. To deal with
challenging settings in DG where neither the data nor the labels of the unseen
domain are available at training time, the most common approach is to design
classifiers based on domain-invariant representation features, i.e., the
latent representations that are unchanged and transferable between domains.
Contrary to popular belief, we show that designing classifiers based on
invariant representation features alone is necessary but insufficient in DG.
Our analysis indicates the necessity of imposing a constraint on the
reconstruction loss induced by representation functions to preserve most of the
relevant information about the label in the latent space. More importantly, we
point out the trade-off between minimizing the reconstruction loss and
achieving domain alignment in DG. Our theoretical results motivate a new DG
framework that jointly optimizes the reconstruction loss and the domain
discrepancy. Both theoretical and numerical results are provided to justify our
approach.
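
The abstract describes a DG objective that jointly optimizes the classification loss, the reconstruction loss induced by the representation function, and a domain-discrepancy (alignment) term. The sketch below is a minimal, hypothetical PyTorch rendering of such a joint objective, not the authors' released code: the encoder/decoder sizes, the simple first-moment alignment term (a stand-in for a proper discrepancy measure such as MMD), and the weights lam_rec and lam_align are illustrative assumptions.

```python
# Minimal sketch of a joint "reconstruction + alignment" DG objective.
# Hypothetical architecture and weights; not the paper's official implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DGAutoencoder(nn.Module):
    def __init__(self, in_dim=64, latent_dim=16, num_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z), self.classifier(z)

def mean_discrepancy(z_a, z_b):
    # First-moment alignment between two domains' latent features
    # (illustrative stand-in for MMD or adversarial discrepancy measures).
    return (z_a.mean(dim=0) - z_b.mean(dim=0)).pow(2).sum()

def dg_loss(model, x_a, y_a, x_b, y_b, lam_rec=1.0, lam_align=0.1):
    z_a, xr_a, logits_a = model(x_a)
    z_b, xr_b, logits_b = model(x_b)
    cls = F.cross_entropy(logits_a, y_a) + F.cross_entropy(logits_b, y_b)
    rec = F.mse_loss(xr_a, x_a) + F.mse_loss(xr_b, x_b)  # keeps label-relevant info in z
    align = mean_discrepancy(z_a, z_b)                    # domain alignment term
    return cls + lam_rec * rec + lam_align * align

# Toy usage on random data from two "seen" domains.
if __name__ == "__main__":
    model = DGAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_a, y_a = torch.randn(32, 64), torch.randint(0, 5, (32,))
    x_b, y_b = torch.randn(32, 64) + 0.5, torch.randint(0, 5, (32,))
    opt.zero_grad()
    loss = dg_loss(model, x_a, y_a, x_b, y_b)
    loss.backward()
    opt.step()
    print(f"joint DG loss: {loss.item():.4f}")
```

The weights lam_rec and lam_align expose the trade-off the paper analyzes: pushing alignment harder tends to discard label-relevant information that the reconstruction term tries to preserve, so the two cannot be minimized independently.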
Related papers
- Disentangling Masked Autoencoders for Unsupervised Domain Generalization [57.56744870106124]
Unsupervised domain generalization is fast gaining attention but is still far from well-studied.
Disentangled Masked Autoencoder (DisMAE) aims to discover disentangled representations that faithfully reveal intrinsic features.
DisMAE co-trains the asymmetric dual-branch architecture with semantic and lightweight variation encoders.
arXiv Detail & Related papers (2024-07-10T11:11:36Z) - Gradually Vanishing Gap in Prototypical Network for Unsupervised Domain Adaptation [32.58201185195226]
We propose an efficient UDA framework named Gradually Vanishing Gap in Prototypical Network (GVG-PN)
Our model achieves transfer learning from both global and local perspectives.
Experiments on several UDA benchmarks validate that the proposed GVG-PN clearly outperforms SOTA models.
arXiv Detail & Related papers (2024-05-28T03:03:32Z) - Transitive Vision-Language Prompt Learning for Domain Generalization [41.484858946789664]
The vision-language pre-training has enabled deep models to make a huge step forward in generalizing across unseen domains.
However, such advances still suffer from a trade-off between domain invariance and class separability.
arXiv Detail & Related papers (2024-04-29T14:56:11Z) - Complementary Domain Adaptation and Generalization for Unsupervised
Continual Domain Shift Learning [4.921899151930171]
Unsupervised continual domain shift learning is a significant challenge in real-world applications.
We propose Complementary Domain Adaptation and Generalization (CoDAG), a simple yet effective learning framework.
Our approach is model-agnostic, meaning that it is compatible with any existing domain adaptation and generalization algorithms.
arXiv Detail & Related papers (2023-03-28T09:05:15Z) - Relation Matters: Foreground-aware Graph-based Relational Reasoning for
Domain Adaptive Object Detection [81.07378219410182]
We propose a new and general framework for domain adaptive object detection, named Foreground-aware Graph-based Relational Reasoning (FGRR).
FGRR incorporates graph structures into the detection pipeline to explicitly model the intra- and inter-domain foreground object relations.
Empirical results demonstrate that the proposed FGRR exceeds the state-of-the-art on four domain adaptive object detection benchmarks.
arXiv Detail & Related papers (2022-06-06T05:12:48Z) - Towards Principled Disentanglement for Domain Generalization [90.9891372499545]
A fundamental challenge for machine learning models is generalizing to out-of-distribution (OOD) data.
We first formalize the OOD generalization problem as constrained optimization, called Disentanglement-constrained Domain Generalization (DDG)
Based on this formulation, we propose a primal-dual algorithm for joint representation disentanglement and domain generalization.
arXiv Detail & Related papers (2021-11-27T07:36:32Z) - Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA)
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Dual Distribution Alignment Network for Generalizable Person
Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID)
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)