Learning to Learn Single Domain Generalization
- URL: http://arxiv.org/abs/2003.13216v1
- Date: Mon, 30 Mar 2020 04:39:53 GMT
- Title: Learning to Learn Single Domain Generalization
- Authors: Fengchun Qiao, Long Zhao, Xi Peng
- Abstract summary: We propose a new method named adversarial domain augmentation to solve this Out-of-Distribution (OOD) generalization problem.
The key idea is to leverage adversarial training to create "fictitious" yet "challenging" populations.
To facilitate fast and desirable domain augmentation, we cast the model training in a meta-learning scheme and use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case constraint.
- Score: 18.72451358284104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are concerned with a worst-case scenario in model generalization, in the
sense that a model aims to perform well on many unseen domains while there is
only one single domain available for training. We propose a new method named
adversarial domain augmentation to solve this Out-of-Distribution (OOD)
generalization problem. The key idea is to leverage adversarial training to
create "fictitious" yet "challenging" populations, from which a model can learn
to generalize with theoretical guarantees. To facilitate fast and desirable
domain augmentation, we cast the model training in a meta-learning scheme and
use a Wasserstein Auto-Encoder (WAE) to relax the widely used worst-case
constraint. Detailed theoretical analysis is provided to justify our
formulation, while extensive experiments on multiple benchmark datasets
indicate its superior performance in tackling single domain generalization.
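At a high level, adversarial domain augmentation generates "fictitious" training samples by perturbing source images so that the task loss rises while a distance term keeps the perturbed samples semantically tied to the source domain; the WAE relaxes the hard worst-case distance constraint into a soft reconstruction penalty, and model training is wrapped in a meta-learning loop. The snippet below is a minimal, hypothetical PyTorch sketch of that inner maximization only: `task_model`, `wae`, the loss weights `alpha`/`beta`, and the sign conventions are illustrative assumptions, not the authors' released code or exact objective.

```python
import torch
import torch.nn.functional as F

def augment_domain(x, y, task_model, wae, steps=10, lr=1.0, alpha=1.0, beta=1e-4):
    """Generate 'fictitious' samples by ascending the task loss while keeping
    the samples close to the source domain (hypothetical sketch, not the
    paper's exact objective)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        task_loss = F.cross_entropy(task_model(x_adv), y)  # encourage the classifier to fail
        recon_loss = F.mse_loss(wae(x_adv), x_adv)          # WAE reconstruction used as a soft,
                                                            # relaxed distance-to-source term
        pixel_loss = F.mse_loss(x_adv, x)                   # do not drift too far from x
        # gradient ascent on the combined objective: raise the task loss,
        # penalize leaving the neighborhood of the source distribution
        objective = task_loss - alpha * recon_loss - beta * pixel_loss
        grad, = torch.autograd.grad(objective, x_adv)
        x_adv = (x_adv + lr * grad).detach().requires_grad_(True)
    return x_adv.detach()
```

In a full pipeline, the returned `x_adv` batch would be added to the training pool, and the model would then be updated on both the source and the augmented domains under the meta-learning scheme described in the abstract.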
Related papers
- Non-stationary Domain Generalization: Theory and Algorithm [11.781050299571692]
In this paper, we study domain generalization in a non-stationary environment.
We first examine the impact of environmental non-stationarity on model performance.
Then, we propose a novel algorithm based on adaptive invariant representation learning.
arXiv Detail & Related papers (2024-05-10T21:32:43Z) - Continuous Unsupervised Domain Adaptation Using Stabilized Representations and Experience Replay [23.871860648919593]
We introduce an algorithm for tackling the problem of unsupervised domain adaptation (UDA) in continual learning (CL) scenarios.
Our solution is based on stabilizing the learned internal distribution to enhance model generalization on new domains.
We leverage experience replay to overcome the problem of catastrophic forgetting, where the model loses previously acquired knowledge when learning new tasks.
arXiv Detail & Related papers (2024-01-31T05:09:14Z) - Improving Generalization with Domain Convex Game [32.07275105040802]
Domain generalization aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains.
A classical solution to DG is domain augmentation, based on the common belief that diversifying the source domains is conducive to out-of-distribution generalization.
Our explorations reveal that the correlation between model generalization and the diversity of domains may not be strictly positive, which limits the effectiveness of domain augmentation.
arXiv Detail & Related papers (2023-03-23T14:27:49Z) - When Neural Networks Fail to Generalize? A Model Sensitivity Perspective [82.36758565781153]
Domain generalization (DG) aims to train a model to perform well in unseen domains under different distributions.
This paper considers a more realistic yet more challenging scenario, namely Single Domain Generalization (Single-DG).
We empirically identify a model property that correlates strongly with generalization, which we coin "model sensitivity".
We propose a novel strategy of Spectral Adversarial Data Augmentation (SADA) to generate augmented images targeted at the highly sensitive frequencies.
arXiv Detail & Related papers (2022-12-01T20:15:15Z) - Single-domain Generalization in Medical Image Segmentation via Test-time Adaptation from Shape Dictionary [64.5632303184502]
Domain generalization typically requires data from multiple source domains for model learning.
This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains.
We present a novel approach to address this problem in medical image segmentation, which extracts and integrates semantic shape prior information that is invariant across domains.
arXiv Detail & Related papers (2022-06-29T08:46:27Z) - Unsupervised Domain Generalization for Person Re-identification: A Domain-specific Adaptive Framework [50.88463458896428]
Domain generalization (DG) has attracted much attention in person re-identification (ReID) recently.
Existing methods usually need the source domains to be labeled, which could be a significant burden for practical ReID tasks.
We propose a simple and efficient domain-specific adaptive framework, and realize it with an adaptive normalization module.
arXiv Detail & Related papers (2021-11-30T02:35:51Z) - Self-balanced Learning For Domain Generalization [64.99791119112503]
Domain generalization aims to learn a prediction model on multi-domain source data such that the model can generalize to a target domain with unknown statistics.
Most existing approaches have been developed under the assumption that the source data is well-balanced in terms of both domain and class.
We propose a self-balanced domain generalization framework that adaptively learns the weights of losses to alleviate the bias caused by different distributions of the multi-domain source data.
arXiv Detail & Related papers (2021-08-31T03:17:54Z) - f-Domain-Adversarial Learning: Theory and Algorithms [82.97698406515667]
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data in the target domain.
We derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences.
arXiv Detail & Related papers (2021-06-21T18:21:09Z) - Domain Generalization using Ensemble Learning [0.0]
We tackle the problem of a model's weak generalization when it is trained on a single source domain.
To this end, we build an ensemble model on top of base deep learning models trained on a single source domain to enhance the generalization of their collective prediction.
arXiv Detail & Related papers (2021-03-18T13:50:36Z) - Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the recently proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)