Class-conditioned Domain Generalization via Wasserstein Distributional
Robust Optimization
- URL: http://arxiv.org/abs/2109.03676v1
- Date: Wed, 8 Sep 2021 14:23:03 GMT
- Title: Class-conditioned Domain Generalization via Wasserstein Distributional
Robust Optimization
- Authors: Jingge Wang, Yang Li, Liyan Xie, Yao Xie
- Abstract summary: Given multiple source domains, domain generalization aims at learning a universal model that performs well on any unseen but related target domain.
Existing approaches are not sufficiently robust when the variation of conditional distributions given the same class is large.
We extend the concept of distributional robust optimization to solve the class-conditional domain generalization problem.
- Score: 12.10885662305154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given multiple source domains, domain generalization aims at learning a
universal model that performs well on any unseen but related target domain. In
this work, we focus on the domain generalization scenario where domain shifts
occur among class-conditional distributions of different domains. Existing
approaches are not sufficiently robust when the variation of conditional
distributions given the same class is large. In this work, we extend the
concept of distributional robust optimization to solve the class-conditional
domain generalization problem. Our approach optimizes the worst-case
performance of a classifier over class-conditional distributions within a
Wasserstein ball centered around the barycenter of the source conditional
distributions. We also propose an iterative algorithm for learning the optimal
radius of the Wasserstein balls automatically. Experiments show that the
proposed framework performs better on unseen target domains than
approaches without domain generalization.
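As a rough formalization of the training objective (notation ours, not necessarily the paper's): let \bar{P}_k be the Wasserstein barycenter of the source class-conditional distributions for class k, \epsilon_k the learned radius, and W the Wasserstein distance; the classifier f_\theta is trained against the worst case in each class-conditional ball, along the lines of

    \min_{\theta} \sum_{k=1}^{K} \sup_{Q_k :\, W(Q_k, \bar{P}_k) \le \epsilon_k} \mathbb{E}_{x \sim Q_k} \left[ \ell\big(f_\theta(x), k\big) \right]

with the exact per-class aggregation (sum, max, or weighted) settled in the paper. In practice the inner supremum in Wasserstein DRO is often approximated by a Lagrangian relaxation that adversarially perturbs training samples; the PyTorch sketch below illustrates that generic device only, not the paper's actual algorithm (loss_fn, gamma, steps, lr are hypothetical names):

    import torch

    def wdro_inner_loss(model, loss_fn, x, y, gamma=1.0, steps=5, lr=0.1):
        # Approximate sup over the Wasserstein ball via the Lagrangian
        #   max_{x'} loss(model(x'), y) - gamma * ||x' - x||^2
        # using a few gradient-ascent steps on the perturbed input x'.
        x_adv = x.clone().detach().requires_grad_(True)
        for _ in range(steps):
            obj = loss_fn(model(x_adv), y) - gamma * ((x_adv - x) ** 2).sum()
            grad, = torch.autograd.grad(obj, x_adv)
            with torch.no_grad():
                x_adv += lr * grad  # ascent on the inner objective
        return loss_fn(model(x_adv.detach()), y)  # minimized w.r.t. model params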
Related papers
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method that minimizes the KL-divergence between posterior distributions from different domains (a hedged formalization follows this entry).
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
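A hedged formalization of the KL objective mentioned above (notation ours): with posteriors p_i(y | x) estimated on each source domain i under a shared model,

    \min_{\theta} \sum_{i \ne j} \mathbb{E}_{x} \left[ \mathrm{KL}\big( p_i(y \mid x) \,\|\, p_j(y \mid x) \big) \right]

while maximizing the cross-domain likelihood subject to the paper's constraints; the exact constrained form is given in the paper.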
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Maximizing Conditional Independence for Unsupervised Domain Adaptation [9.533515002375545]
We study how to transfer a learner from a labeled source domain to an unlabeled target domain with different distributions.
In addition to unsupervised domain adaptation, we extend our method to the multi-source scenario in a natural and elegant way.
arXiv Detail & Related papers (2022-03-07T08:59:21Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training compared to state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Adaptive Domain-Specific Normalization for Generalizable Person Re-Identification [81.30327016286009]
We propose a novel adaptive domain-specific normalization approach (AdsNorm) for generalizable person Re-ID.
arXiv Detail & Related papers (2021-05-07T02:54:55Z)
- Gradient Matching for Domain Generalization [93.04545793814486]
A critical requirement of machine learning systems is their ability to generalize to unseen domains.
We propose an inter-domain gradient matching objective that targets domain generalization.
We derive a simpler first-order algorithm named Fish that approximates its optimization (a rough sketch follows this entry).
arXiv Detail & Related papers (2021-04-20T12:55:37Z)
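Fish (from the entry above) admits a compact first-order form: run one SGD step per domain minibatch on a clone of the model, then move the original weights toward the adapted clone, which implicitly increases the inner product between per-domain gradients. A minimal PyTorch sketch, simplified from the paper's algorithm, with inner_lr and meta_lr as hypothetical hyperparameter names:

    import copy
    import torch

    def fish_step(model, domain_batches, loss_fn, inner_lr=0.01, meta_lr=0.5):
        # One SGD step per domain minibatch on a cloned model.
        inner = copy.deepcopy(model)
        opt = torch.optim.SGD(inner.parameters(), lr=inner_lr)
        for x, y in domain_batches:
            opt.zero_grad()
            loss_fn(inner(x), y).backward()
            opt.step()
        # Reptile-style outer update: step toward the adapted weights.
        with torch.no_grad():
            for p, q in zip(model.parameters(), inner.parameters()):
                p += meta_lr * (q - p)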
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process (a generic sketch of such a loss follows this entry).
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
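The multi-sample contrastive loss mentioned in the ILA-DA entry above can be illustrated with a generic InfoNCE-style construction; this is not necessarily the paper's exact loss, and anchor/positives/negatives are hypothetical inputs:

    import torch
    import torch.nn.functional as F

    def multi_sample_contrastive(anchor, positives, negatives, tau=0.1):
        # anchor: (d,); positives: (P, d); negatives: (N, d) feature vectors.
        # Pull the anchor toward similar samples, push it from dissimilar ones.
        a = F.normalize(anchor, dim=0)
        pos = F.normalize(positives, dim=1) @ a / tau   # similarities to positives
        neg = F.normalize(negatives, dim=1) @ a / tau   # similarities to negatives
        log_denom = torch.logsumexp(torch.cat([pos, neg]), dim=0)
        return -(pos - log_denom).mean()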
- Model-Based Domain Generalization [96.84818110323518]
We propose a novel approach for the domain generalization problem called Model-Based Domain Generalization.
Our algorithms beat the current state-of-the-art methods on the very-recently-proposed WILDS benchmark by up to 20 percentage points.
arXiv Detail & Related papers (2021-02-23T00:59:02Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches have been proposed to address domain generalization by learning domain-invariant representations across the source domains, but such representations do not guarantee generalization to the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Universal Domain Adaptation through Self Supervision [75.04598763659969]
Unsupervised domain adaptation methods assume that all source categories are present in the target domain.
We propose Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE) to handle arbitrary category shift (a sketch of the entropy component follows this entry).
We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings.
arXiv Detail & Related papers (2020-02-19T01:26:11Z)
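One plausible reading of the entropy-optimization component of DANCE referenced above is an entropy-separation idea: push each target sample's prediction entropy away from a threshold, so confident samples align with known classes and uncertain ones are rejected as unknown. A hedged PyTorch sketch, not verified against the paper's exact loss (threshold and margin are hypothetical names):

    import torch

    def entropy_separation(logits, threshold=0.5, margin=0.2):
        # Prediction entropy per sample.
        p = logits.softmax(dim=1)
        ent = -(p * p.clamp_min(1e-8).log()).sum(dim=1)
        dist = ent - threshold
        mask = dist.abs() > margin          # only samples far from the threshold
        if not mask.any():
            return logits.sum() * 0.0       # zero loss, keeps the graph valid
        return -dist[mask].abs().mean()     # push entropy further from threshold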