Quantitatively Measuring and Contrastively Exploring Heterogeneity for
Domain Generalization
- URL: http://arxiv.org/abs/2305.15889v3
- Date: Sat, 11 Nov 2023 14:22:47 GMT
- Title: Quantitatively Measuring and Contrastively Exploring Heterogeneity for
Domain Generalization
- Authors: Yunze Tong, Junkun Yuan, Min Zhang, Didi Zhu, Keli Zhang, Fei Wu, Kun
Kuang
- Abstract summary: We propose Heterogeneity-based Two-stage Contrastive Learning (HTCL) for the domain generalization (DG) task.
In the first stage, we generate the most heterogeneous dividing pattern with our contrastive metric.
In the second stage, we employ an invariance-aimed contrastive learning by re-building pairs with the stable relation hinted by domains and classes.
- Score: 38.50749918578154
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain generalization (DG) is a prevalent problem in real-world applications;
it aims to train well-generalized models for unseen target domains by
utilizing several source domains. Since domain labels, i.e., which domain each
data point is sampled from, naturally exist, most DG algorithms treat them as a
form of supervision to improve generalization performance.
However, the original domain labels may not be the optimal supervision signal
due to the lack of domain heterogeneity, i.e., the diversity among domains. For
example, a sample in one domain may lie closer to another domain; its original
label can thus act as noise that disturbs generalization learning. Although
some methods try to solve this by re-dividing domains and applying the newly
generated dividing pattern, the pattern they choose may not be the most
heterogeneous one, since a metric for heterogeneity has been lacking. In this paper,
we point out that domain heterogeneity mainly lies in variant features under
the invariant learning framework. With contrastive learning, we propose a
learning potential-guided metric for domain heterogeneity by promoting the
learning of variant features. Then we notice the difference between seeking
variance-based heterogeneity and training an invariance-based generalizable
model. We thus
propose a novel method called Heterogeneity-based Two-stage Contrastive
Learning (HTCL) for the DG task. In the first stage, we generate the most
heterogeneous dividing pattern with our contrastive metric. In the second
stage, we employ an invariance-aimed contrastive learning by re-building pairs
with the stable relation hinted by domains and classes, which better utilizes
generated domain labels for generalization learning. Extensive experiments show
that HTCL better mines domain heterogeneity and achieves strong generalization performance.
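To make the two-stage pipeline concrete, here is a minimal PyTorch sketch, not the authors' implementation: Stage 1 is approximated by a k-means stand-in for the paper's learning-potential-guided search over dividing patterns, and Stage 2 builds InfoNCE-style positives from same-class pairs that cross pseudo-domains. Every name and hyperparameter below is an illustrative assumption.

```python
# Hypothetical sketch of HTCL's two-stage idea, NOT the authors' code.
import torch
import torch.nn.functional as F

def redivide_domains(features, n_domains, iters=10):
    """Stage 1 (stand-in): cluster samples into pseudo-domains with k-means;
    the paper instead selects the dividing pattern that maximizes its
    contrastive heterogeneity metric."""
    idx = torch.randperm(features.size(0))[:n_domains]
    centroids = features[idx].clone()
    for _ in range(iters):
        assign = torch.cdist(features, centroids).argmin(dim=1)  # (N,)
        for k in range(n_domains):
            mask = assign == k
            if mask.any():
                centroids[k] = features[mask].mean(dim=0)
    return assign

def invariance_contrastive_loss(z, labels, domains, temperature=0.1):
    """Stage 2: InfoNCE-style loss whose positives are same-class samples
    from *different* pseudo-domains, nudging the encoder toward
    domain-invariant class features."""
    z = F.normalize(z, dim=1)
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = (z @ z.t() / temperature).masked_fill(eye, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & \
          (domains[:, None] != domains[None, :]) & ~eye
    per_sample = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_sample[pos.any(dim=1)].mean()

# Toy usage: 64 encoder outputs, 7 classes, 3 pseudo-domains.
feats = torch.randn(64, 128)
labels = torch.randint(0, 7, (64,))
pseudo_domains = redivide_domains(feats, n_domains=3)
loss = invariance_contrastive_loss(feats, labels, pseudo_domains)
```

In a full training loop, this loss would be added to the usual classification objective after each re-division of the source data.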
Related papers
- Frequency Decomposition to Tap the Potential of Single Domain for Generalization [10.555462823983122]
Domain generalization is a must-have characteristic of general artificial intelligence.
This paper finds that domain-invariant features can be contained within the training samples of a single source domain.
A new method is proposed that learns through multiple domains constructed by frequency decomposition, as sketched below.
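As a hedged illustration of the frequency-decomposition idea, the sketch below splits a single-domain image batch into low- and high-frequency components that can then be treated as separate training domains; the circular FFT mask and the `radius` parameter are assumptions, not details from the paper.

```python
# Illustrative frequency split of images into low-/high-frequency "domains".
import torch

def frequency_split(images, radius=0.1):
    """Return (low, high) components of a (B, C, H, W) batch via a
    centered circular mask in the 2-D FFT domain (details assumed)."""
    _, _, H, W = images.shape
    fft = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.linspace(-0.5, 0.5, H),
                            torch.linspace(-0.5, 0.5, W), indexing="ij")
    mask = (yy ** 2 + xx ** 2).sqrt() <= radius   # True = low frequencies
    low = torch.fft.ifft2(torch.fft.ifftshift(fft * mask, dim=(-2, -1))).real
    high = torch.fft.ifft2(torch.fft.ifftshift(fft * ~mask, dim=(-2, -1))).real
    return low, high

low, high = frequency_split(torch.randn(8, 3, 224, 224))
```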
arXiv Detail & Related papers (2023-04-14T17:15:47Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
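A hedged sketch of test-time adaptation by disagreement minimization; the two-classifier setup and KL-based objective below describe the general recipe as an assumption, not AdaODM's exact procedure.

```python
# Assumed recipe: adapt the encoder at test time so two source-trained
# classifier heads agree on unlabeled target data.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in backbone
clf1, clf2 = nn.Linear(64, 7), nn.Linear(64, 7)         # two source heads
opt = torch.optim.SGD(encoder.parameters(), lr=1e-3)    # adapt encoder only

def adapt_step(x_target):
    """One online step: minimize the disagreement (KL divergence)
    between the two heads' predictions on a target batch."""
    feats = encoder(x_target)
    log_p1 = F.log_softmax(clf1(feats), dim=1)
    p2 = F.softmax(clf2(feats), dim=1)
    loss = F.kl_div(log_p1, p2, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

adapt_step(torch.randn(32, 128))  # unlabeled target batch
```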
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard domain generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
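A minimal sketch of the prototype step mentioned above, i.e., class centroids in the embedding space; the function name and shapes are illustrative assumptions.

```python
# Prototypes as per-class mean embeddings (illustrative, not COMEN's code).
import torch

def class_prototypes(embeddings, labels, n_classes):
    """Return an (n_classes, D) tensor of class centroids."""
    protos = torch.zeros(n_classes, embeddings.size(1))
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return protos

protos = class_prototypes(torch.randn(64, 32), torch.randint(0, 7, (64,)), 7)
```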
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Generalizable Representation Learning for Mixture Domain Face Anti-Spoofing [53.82826073959756]
Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness in unseen scenarios.
To overcome the limitations of existing methods, we propose domain dynamic adjustment meta-learning (D2AM), which requires no domain labels.
arXiv Detail & Related papers (2021-05-06T06:04:59Z)
- Heterogeneous Domain Generalization via Domain Mixup [0.0]
One of the main drawbacks of deep convolutional neural networks (DCNNs) is their lack of generalization capability.
We propose a novel heterogeneous domain generalization method by mixing up samples across multiple source domains.
Our experimental results on the Visual Decathlon benchmark demonstrate the effectiveness of the proposed method.
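A hedged sketch of cross-domain mixup; the paper's exact mixing scheme may differ, and `alpha` plus the function below are illustrative assumptions.

```python
# Convexly combine batches drawn from two different source domains.
import torch

def domain_mixup(x_a, y_a, x_b, y_b, alpha=0.2):
    """Mix a batch from domain A with one from domain B; returns the
    mixed inputs, both label sets, and the mixing weight for the loss."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_a + (1.0 - lam) * x_b
    return x_mix, y_a, y_b, lam

x_mix, y_a, y_b, lam = domain_mixup(
    torch.randn(16, 3, 64, 64), torch.randint(0, 10, (16,)),
    torch.randn(16, 3, 64, 64), torch.randint(0, 10, (16,)))
# Training loss: lam * CE(model(x_mix), y_a) + (1 - lam) * CE(model(x_mix), y_b)
```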
arXiv Detail & Related papers (2020-09-11T13:53:56Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches learn domain-invariant representations across the source domains, yet such representations fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Dual Distribution Alignment Network for Generalizable Person Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z)
- Learning to Generate Novel Domains for Domain Generalization [115.21519842245752]
This paper focuses on the task of learning from multiple source domains a model that generalizes well to unseen domains.
We employ a data generator to synthesize data from pseudo-novel domains to augment the source domains.
Our method, L2A-OT, outperforms current state-of-the-art DG methods on four benchmark datasets.
arXiv Detail & Related papers (2020-07-07T09:34:17Z)