Learning to Diversify for Single Domain Generalization
- URL: http://arxiv.org/abs/2108.11726v3
- Date: Wed, 22 Mar 2023 09:13:00 GMT
- Title: Learning to Diversify for Single Domain Generalization
- Authors: Zijian Wang, Yadan Luo, Ruihong Qiu, Zi Huang, Mahsa Baktashmotlagh
- Abstract summary: Domain generalization (DG) aims to generalize a model trained on multiple source (i.e., training) domains to a distributionally different target (i.e., test) domain.
This paper considers a more realistic yet challenging scenario, namely Single Domain Generalization (Single-DG), where only one source domain is available for training.
In this scenario, the limited diversity may jeopardize the model generalization on unseen target domains.
We propose a style-complement module to enhance the generalization power of the model by synthesizing images from diverse distributions that are complementary to the source ones.
- Score: 46.35670520201863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) aims to generalize a model trained on multiple
source (i.e., training) domains to a distributionally different target (i.e.,
test) domain. In contrast to the conventional DG that strictly requires the
availability of multiple source domains, this paper considers a more realistic
yet challenging scenario, namely Single Domain Generalization (Single-DG),
where only one source domain is available for training. In this scenario, the
limited diversity may jeopardize the model generalization on unseen target
domains. To tackle this problem, we propose a style-complement module to
enhance the generalization power of the model by synthesizing images from
diverse distributions that are complementary to the source ones. More
specifically, we adopt a tractable upper bound of mutual information (MI)
between the generated and source samples and perform a two-step optimization
iteratively: (1) by minimizing the MI upper bound approximation for each sample
pair, the generated images are forced to be diversified from the source
samples; (2) subsequently, we maximize the MI between the samples from the same
semantic category, which assists the network to learn discriminative features
from diverse-styled images. Extensive experiments on three benchmark datasets
demonstrate the superiority of our approach, which surpasses the
state-of-the-art single-DG methods by up to 25.14%.
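The two-step optimization described above can be illustrated with a toy sketch. This is not the authors' released code; it assumes a CLUB-style sample-based MI upper bound with a unit-variance Gaussian variational density q(gen | src) = N(gen; src, I), under which the bound reduces to pairwise squared distances (constants cancel). The helper names (`diversify_step`, `semantic_align_step`) are made up for illustration, and step (2) is approximated crudely as pulling same-class features toward their class centroid.

```python
import numpy as np

def mi_upper_bound(src, gen):
    """Sample-based upper bound on MI between source and generated
    features: expected log-density over matched pairs minus the same
    expectation over all pairs (CLUB-style, Gaussian q with unit
    variance, additive constants dropped)."""
    pos = -0.5 * np.mean(np.sum((gen - src) ** 2, axis=1))   # matched pairs
    diff = src[:, None, :] - gen[None, :, :]                 # all pairs, shape (n, n, d)
    neg = -0.5 * np.mean(np.sum(diff ** 2, axis=2))
    return pos - neg

def diversify_step(src, gen, lr=0.5):
    """Step (1): one gradient-descent step on the MI upper bound,
    pushing generated features away from their source counterparts."""
    n = len(src)
    grad_pos = -(gen - src) / n                                  # d(pos)/d(gen)
    grad_neg = -(gen - src.mean(axis=0, keepdims=True)) / n      # d(neg)/d(gen)
    return gen - lr * (grad_pos - grad_neg)

def semantic_align_step(feats, labels, lr=0.5):
    """Step (2), crude stand-in: maximize MI within a semantic
    category by shrinking each feature toward its class centroid."""
    out = feats.copy()
    for c in np.unique(labels):
        mask = labels == c
        centroid = feats[mask].mean(axis=0)
        out[mask] = feats[mask] - lr * (feats[mask] - centroid)
    return out
```

In a real pipeline both steps would act on network features with learned generators and encoders; the sketch only shows the direction of each objective: step (1) decreases the MI bound between generated and source samples, step (2) tightens same-class clusters.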
Related papers
- PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization [24.413415998529754]
We propose a new benchmark, Hybrid Domain Generalization (HDG), and a novel metric, $H^2$-CV, which construct various splits to assess the robustness of algorithms.
Our method outperforms state-of-the-art algorithms on multiple datasets, especially improving robustness when confronting data scarcity.
arXiv Detail & Related papers (2024-04-13T13:41:13Z)
- Uncertainty-guided Contrastive Learning for Single Source Domain Generalisation [15.907643838530655]
In this paper, we introduce a novel model referred to as the Contrastive Uncertainty Domain Generalisation Network (CUDGNet).
The key idea is to augment the source capacity in both the input and label spaces through a fictitious domain generator.
Our method also provides efficient uncertainty estimation at inference time from a single forward pass through the generator subnetwork.
arXiv Detail & Related papers (2024-03-12T10:47:45Z)
- Modality-Agnostic Debiasing for Single Domain Generalization [105.60451710436735]
We introduce a versatile Modality-Agnostic Debiasing (MAD) framework for single-DG.
We show that MAD improves DSU by 2.82% in accuracy for recognition on 3D point clouds and by 1.5% in mIoU for semantic segmentation on 2D images.
arXiv Detail & Related papers (2023-03-13T13:56:11Z)
- Causality-based Dual-Contrastive Learning Framework for Domain Generalization [16.81075442901155]
Domain Generalization (DG) is essentially a sub-branch of out-of-distribution generalization.
In this paper, we propose a Dual-Contrastive Learning (DCL) module based on feature and prototype contrast.
We also introduce a Similarity-based Hard-pair Mining (SHM) strategy to leverage information on diversity shift.
arXiv Detail & Related papers (2023-01-22T13:07:24Z)
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show that AdaODM stably improves generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- A Novel Mix-normalization Method for Generalizable Multi-source Person Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer a supervised model to arbitrary unseen domains because the model overfits to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z)
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to target-domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN), which handles this challenge by selectively aligning the distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.