Domain-invariant Feature Exploration for Domain Generalization
- URL: http://arxiv.org/abs/2207.12020v1
- Date: Mon, 25 Jul 2022 09:55:55 GMT
- Title: Domain-invariant Feature Exploration for Domain Generalization
- Authors: Wang Lu, Jindong Wang, Haoliang Li, Yiqiang Chen, Xing Xie
- Abstract summary: We argue that domain-invariant features should originate from both internal and mutual sides.
We propose DIFEX for Domain-Invariant Feature EXploration.
Experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
- Score: 35.99082628524934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has achieved great success in the past few years. However,
the performance of deep learning models is likely to degrade in the face of non-IID
situations. Domain generalization (DG) enables a model to generalize to an unseen
test distribution by learning domain-invariant representations. In this paper, we
argue that domain-invariant features should originate from both internal and
mutual sides. Internal invariance means that the features can be
learned with a single domain and the features capture intrinsic semantics of
data, i.e., the property within a domain, which is agnostic to other domains.
Mutual invariance means that the features can be learned with multiple domains
(cross-domain) and the features contain common information, i.e., the
transferable features w.r.t. other domains. We then propose DIFEX for
Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation
framework to capture the high-level Fourier phase as the internally-invariant
features and learn cross-domain correlation alignment as the mutually-invariant
features. We further design an exploration loss to increase the feature
diversity for better generalization. Extensive experiments on both time-series
and visual benchmarks demonstrate that the proposed DIFEX achieves
state-of-the-art performance.
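To ground the three ingredients the abstract names, here is a minimal PyTorch-style sketch of one DIFEX-like training step, assuming a `backbone` that emits a 2d-dimensional embedding split half-and-half into internal and mutual parts, a `teacher` pre-trained to regress Fourier-phase features (the distillation target), and one mini-batch per source domain. The half-and-half split, the negative-MSE exploration term, and all loss weights are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def fourier_phase(x):
    # Phase spectrum of an image batch (B, C, H, W) via 2-D FFT; the paper
    # treats high-level Fourier phase as the internally-invariant signal.
    return torch.angle(torch.fft.fft2(x, dim=(-2, -1)))

def coral_loss(f_a, f_b):
    # Correlation alignment: penalize the gap between the feature
    # covariance matrices (second-order statistics) of two domains.
    d = f_a.size(1)
    gap = torch.cov(f_a.T) - torch.cov(f_b.T)
    return (gap ** 2).sum() / (4 * d * d)

def exploration_loss(z_int, z_mut):
    # Push the two embedding halves apart so they carry
    # complementary information (feature diversity).
    return -F.mse_loss(z_int, z_mut)

def difex_step(backbone, classifier, teacher, domain_batches,
               beta=1.0, gamma=1.0, lam=0.1):
    # One training step over a list of per-domain (x, y) mini-batches.
    total, mutual = 0.0, []
    for x, y in domain_batches:
        z = backbone(x)                    # (B, 2d) embedding
        z_int, z_mut = z.chunk(2, dim=1)   # internal / mutual halves
        total = total + F.cross_entropy(classifier(z), y)
        with torch.no_grad():
            t = teacher(x)                 # teacher pre-trained on phase features
        total = total + beta * F.mse_loss(z_int, t)           # distillation
        total = total + lam * exploration_loss(z_int, z_mut)  # diversity
        mutual.append(z_mut)
    # Mutual invariance: align the mutual halves across every domain pair.
    for i in range(len(mutual)):
        for j in range(i + 1, len(mutual)):
            total = total + gamma * coral_loss(mutual[i], mutual[j])
    return total
```

The pairwise correlation alignment and the embedding split mirror the abstract's two invariance types; `fourier_phase` is shown only to indicate what the assumed teacher would have been trained to predict.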
Related papers
- Cross-Domain Feature Augmentation for Domain Generalization [16.174824932970004]
We propose a cross-domain feature augmentation method named XDomainMix; a generic feature-mixing sketch follows this entry.
Experiments on widely used benchmark datasets demonstrate that our proposed method is able to achieve state-of-the-art performance.
arXiv Detail & Related papers (2024-05-14T13:24:19Z)
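As a hedged illustration of cross-domain feature augmentation in its simplest form, the sketch below convexly mixes same-class features drawn from two different source domains; this is a generic mixup-style stand-in, not XDomainMix's actual decomposition-based scheme.

```python
import torch

def cross_domain_feature_mix(z_a, z_b, alpha=0.5):
    # z_a, z_b: same-class feature batches from two different source domains.
    # A Beta-sampled coefficient interpolates them, yielding augmented
    # features that straddle the two domains.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * z_a + (1 - lam) * z_b
```

Treating the mixed vector as an extra sample for the shared class label nudges the classifier toward class-relevant rather than domain-specific directions.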
- DIGIC: Domain Generalizable Imitation Learning by Causal Discovery [69.13526582209165]
Causality has been combined with machine learning to produce robust representations for domain generalization.
We make a different attempt by leveraging the demonstration data distribution to discover causal features for a domain generalizable policy.
We design a novel framework, called DIGIC, to identify the causal features by finding the direct cause of the expert action from the demonstration data distribution.
arXiv Detail & Related papers (2024-02-29T07:09:01Z)
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations; the backdoor formula is recalled after this entry.
A series of experiments demonstrate the strong performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
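For background, backdoor adjustment is a standard causal-inference identity; taking the domain as the assumed confounder d, it recovers the causal effect of the input X on the label Y by stratifying over d rather than conditioning on X alone.

```latex
% Backdoor adjustment with the domain d as the (assumed) confounder:
% averaging the d-stratified conditionals removes the confounding path,
% which plain conditioning on X would leave in place.
P(Y \mid \mathrm{do}(X)) = \sum_{d} P(Y \mid X, d)\, P(d)
```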
- Frequency Decomposition to Tap the Potential of Single Domain for Generalization [10.555462823983122]
Domain generalization is a must-have characteristic of general artificial intelligence.
This paper shows that domain-invariant features can already be contained in the training samples of a single source domain.
A new method that learns through multiple domains is proposed; a generic frequency-split sketch follows this entry.
arXiv Detail & Related papers (2023-04-14T17:15:47Z)
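To illustrate the frequency-decomposition idea, the sketch below splits an image batch into low- and high-frequency components with a circular Fourier-domain mask, which could then act as pseudo-domains for a single-source training set; the mask shape and `radius` threshold are arbitrary choices, not the paper's.

```python
import torch

def frequency_split(x, radius=0.1):
    # x: image batch (B, C, H, W). Build a centered circular low-pass mask
    # in the Fourier domain; the complement yields the high-frequency part.
    B, C, H, W = x.shape
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dist = ((yy - H / 2) ** 2 + (xx - W / 2) ** 2).sqrt()
    mask = (dist <= radius * min(H, W)).to(x.dtype)
    low = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1))).real
    return low, x - low  # low- and high-frequency components
```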
- Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization [9.577254317971933]
We argue that domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propose a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space.
arXiv Detail & Related papers (2023-02-05T09:48:57Z)
- Unsupervised Domain Adaptation via Style-Aware Self-intermediate Domain [52.783709712318405]
Unsupervised domain adaptation (UDA), which transfers knowledge from a label-rich source domain to a related but unlabeled target domain, has attracted considerable attention.
We propose a novel style-aware feature fusion method (SAFF) to bridge the large domain gap and transfer knowledge while alleviating the loss of class-discriminative information.
arXiv Detail & Related papers (2022-09-05T10:06:03Z)
- Learning Transferable Parameters for Unsupervised Domain Adaptation [29.962241958947306]
Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled target domain under distribution shift.
We propose Transferable Parameter Learning (TransPar) to reduce the side effect brought by domain-specific information in the learning process.
arXiv Detail & Related papers (2021-08-13T09:09:15Z)
- Quantifying and Improving Transferability in Domain Generalization [53.16289325326505]
Out-of-distribution generalization is one of the key challenges when transferring a model from the lab to the real world.
We formally define a notion of transferability in domain generalization that can be quantified and computed.
We propose a new algorithm for learning transferable features and test it over various benchmark datasets.
arXiv Detail & Related papers (2021-06-07T14:04:32Z)
- Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns the domain-invariant and domain-specific representations.
HDAN exceeds the state of the art on unsupervised DA, multi-source DA, and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from a source domain to a target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers source-specific features.
We create counterfactual features that distinguish the domain-specific parts from the domain-sharable ones.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)