Rethinking Multi-domain Generalization with A General Learning Objective
- URL: http://arxiv.org/abs/2402.18853v1
- Date: Thu, 29 Feb 2024 05:00:30 GMT
- Title: Rethinking Multi-domain Generalization with A General Learning Objective
- Authors: Zhaorui Tan, Xi Yang, Kaizhu Huang
- Abstract summary: Multi-domain generalization (mDG) universally aims to minimize the discrepancy between training and testing distributions.
Existing mDG literature lacks a general learning objective paradigm.
We propose to leverage a $Y$-mapping to relax the constraint.
- Score: 19.28143363034362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-domain generalization (mDG) universally aims to minimize the
discrepancy between training and testing distributions to enhance
marginal-to-label distribution mapping. However, existing mDG literature lacks
a general learning objective paradigm and often imposes constraints on static
target marginal distributions. In this paper, we propose to leverage a
$Y$-mapping to relax the constraint. We rethink the learning objective for mDG
and design a new \textbf{general learning objective} to interpret and analyze
most existing mDG wisdom. This general objective is bifurcated into two
synergistic aims: learning domain-independent conditional features and
maximizing a posterior. We further explore two effective regularization terms
that incorporate prior information and suppress invalid causality, alleviating
the issues that accompany the relaxed constraint. We
theoretically contribute an upper bound for the domain alignment of
domain-independent conditional features, disclosing that many previous mDG
endeavors actually \textbf{only partially optimize the objective} and thus lead to
limited performance. As such, our study distills a general learning objective
into four practical components, providing a general, robust, and flexible
mechanism to handle complex domain shifts. Extensive empirical results indicate
that the proposed objective with $Y$-mapping leads to substantially better mDG
performance in various downstream tasks, including regression, segmentation,
and classification.
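To ground the decomposition, here is a minimal sketch of how the four components (posterior maximization, conditional-feature alignment, a prior term via the $Y$-mapping, and invalid-causality suppression) could be combined into one training loss. The concrete loss forms, the embedding-based stand-in for the $Y$-mapping, and the `lam_*` weights are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conditional_alignment(feats, y, domains, num_classes, num_domains):
    """Penalize the spread of class-conditional feature means across domains."""
    loss, count = feats.new_zeros(()), 0
    for c in range(num_classes):
        means = []
        for d in range(num_domains):
            mask = (y == c) & (domains == d)
            if mask.any():
                means.append(feats[mask].mean(dim=0))
        if len(means) > 1:
            stacked = torch.stack(means)            # (domains_present, dim)
            loss = loss + stacked.var(dim=0).sum()  # variance across domains
            count += 1
    return loss / max(count, 1)

def domain_decorrelation(feats, domains, num_domains):
    """Suppress spurious feature-domain correlation via a cross-covariance norm."""
    d = F.one_hot(domains, num_domains).float()     # domains: long tensor of indices
    f = feats - feats.mean(dim=0, keepdim=True)
    d = d - d.mean(dim=0, keepdim=True)
    cov = f.t() @ d / feats.size(0)                 # (dim, num_domains)
    return (cov ** 2).sum()

class GeneralMDGLoss(nn.Module):
    """Hypothetical four-term objective; weights and loss forms are assumptions."""
    def __init__(self, num_classes, num_domains, feat_dim,
                 lam_align=1.0, lam_prior=0.1, lam_causal=0.1):
        super().__init__()
        self.num_classes, self.num_domains = num_classes, num_domains
        self.y_map = nn.Embedding(num_classes, feat_dim)  # stand-in Y-mapping
        self.lam_align, self.lam_prior, self.lam_causal = lam_align, lam_prior, lam_causal

    def forward(self, feats, logits, y, domains):
        loss_post = F.cross_entropy(logits, y)          # maximize the posterior
        loss_align = conditional_alignment(
            feats, y, domains, self.num_classes, self.num_domains)
        loss_prior = F.mse_loss(feats, self.y_map(y))   # prior via the Y-mapping
        loss_causal = domain_decorrelation(feats, domains, self.num_domains)
        return (loss_post + self.lam_align * loss_align
                + self.lam_prior * loss_prior + self.lam_causal * loss_causal)
```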
Related papers
- LFME: A Simple Framework for Learning from Multiple Experts in Domain Generalization [61.16890890570814]
Domain generalization (DG) methods aim to maintain good performance in an unseen target domain by using training data from multiple source domains.
This work introduces a simple yet effective framework, dubbed learning from multiple experts (LFME), that aims to make the target model an expert in all source domains to improve DG.
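A hedged sketch of one way a target model could be guided by frozen per-domain experts, via simple logit distillation; the function name and temperature below are illustrative assumptions, not necessarily LFME's exact guidance term.

```python
import torch.nn.functional as F

def expert_guidance_loss(student_logits, expert_logits, T=2.0):
    """KL divergence between softened student and frozen expert distributions."""
    p_expert = F.softmax(expert_logits.detach() / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_expert, reduction="batchmean") * T * T
```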
arXiv Detail & Related papers (2024-10-22T13:44:10Z)
- QT-DoG: Quantization-aware Training for Domain Generalization [58.439816306817306]
We propose Quantization-aware Training for Domain Generalization (QT-DoG).
QT-DoG exploits quantization as an implicit regularizer by inducing noise in model weights.
We demonstrate that QT-DoG generalizes across various datasets, architectures, and quantization algorithms.
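As an illustration of quantization acting as an implicit regularizer, the sketch below fake-quantizes weights in the forward pass with straight-through gradients, so rounding noise perturbs the weights during training. The 8-bit per-tensor scheme is an assumption, not QT-DoG's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w, num_bits=8):
    """Round weights to a k-bit grid; gradients pass straight through."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    return w + (w_q - w).detach()  # straight-through estimator

class QuantLinear(nn.Linear):
    """Linear layer whose weights see quantization noise during training."""
    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight), self.bias)
```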
arXiv Detail & Related papers (2024-10-08T13:21:48Z)
- MADG: Margin-based Adversarial Learning for Domain Generalization [25.45950080930517]
We propose a novel adversarial learning DG algorithm, MADG, motivated by a margin loss-based discrepancy metric.
The proposed MADG model learns domain-invariant features across all source domains and uses adversarial training to generalize well to the unseen target domain.
We extensively experiment with the MADG model on popular real-world DG datasets.
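For context, the generic adversarial domain-invariance pattern the summary alludes to can be sketched with a gradient-reversal domain classifier (DANN-style); MADG's margin-based discrepancy loss is not reproduced here.

```python
import torch.nn.functional as F
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; gradients are negated and scaled going back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class DomainAdversary(nn.Module):
    """Domain classifier that, via gradient reversal, pushes features
    toward domain invariance."""
    def __init__(self, feat_dim, num_domains, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Linear(feat_dim, num_domains)

    def forward(self, feats, domains):
        rev = GradReverse.apply(feats, self.lam)
        return F.cross_entropy(self.head(rev), domains)
```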
arXiv Detail & Related papers (2023-11-14T19:53:09Z)
- Improving Generalization with Domain Convex Game [32.07275105040802]
Domain generalization (DG) aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains.
A classical solution to DG is domain augmentation, based on the common belief that diversifying source domains is conducive to out-of-distribution generalization.
Our explorations reveal that the correlation between model generalization and the diversity of domains may not be strictly positive, which limits the effectiveness of domain augmentation.
arXiv Detail & Related papers (2023-03-23T14:27:49Z)
- Label-Efficient Domain Generalization via Collaborative Exploration and Generalization [28.573872986524794]
This paper introduces label-efficient domain generalization (LEDG) to enable model generalization with label-limited source domains.
We propose a novel framework called Collaborative Exploration and Generalization (CEG), which jointly optimizes active exploration and semi-supervised generalization.
arXiv Detail & Related papers (2022-08-07T05:34:50Z)
- Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need for domain supervision.
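A minimal sketch of the prototype (class-centroid) computation the summary mentions; COMEN's SDNorm and its relational-modeling components are not reproduced here.

```python
import torch

def class_prototypes(feats, y, num_classes):
    """Return (num_classes, dim) class centroids; empty classes stay zero."""
    protos = feats.new_zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = y == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return protos
```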
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)