Quantifying and Improving Transferability in Domain Generalization
- URL: http://arxiv.org/abs/2106.03632v1
- Date: Mon, 7 Jun 2021 14:04:32 GMT
- Title: Quantifying and Improving Transferability in Domain Generalization
- Authors: Guojun Zhang, Han Zhao, Yaoliang Yu, Pascal Poupart
- Abstract summary: Out-of-distribution generalization is one of the key challenges when transferring a model from the lab to the real world.
We formally define a notion of transferability that can be quantified and computed in domain generalization.
We propose a new algorithm for learning transferable features and test it over various benchmark datasets.
- Score: 53.16289325326505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution generalization is one of the key challenges when
transferring a model from the lab to the real world. Existing efforts mostly
focus on building invariant features among source and target domains. Based on
invariant features, a high-performing classifier on source domains could
hopefully behave equally well on a target domain. In other words, the invariant
features are \emph{transferable}. However, in practice, there are no perfectly
transferable features, and some algorithms seem to learn ``more transferable''
features than others. How can we understand and quantify such
\emph{transferability}? In this paper, we formally define a notion of transferability that
one can quantify and compute in domain generalization. We point out the
difference and connection with common discrepancy measures between domains,
such as total variation and Wasserstein distance. We then prove that our
transferability can be estimated with enough samples and give a new upper bound
for the target error based on our transferability. Empirically, we evaluate the
transferability of the feature embeddings learned by existing algorithms for
domain generalization. Surprisingly, we find that many algorithms are not quite
learning transferable features, although a few still do. In light of
this, we propose a new algorithm for learning transferable features and test it
over various benchmark datasets, including RotatedMNIST, PACS, Office-Home and
WILDS-FMoW. Experimental results show that the proposed algorithm achieves
consistent improvement over many state-of-the-art algorithms, corroborating our
theoretical findings.
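The abstract contrasts its transferability measure with classical discrepancy measures such as total variation and Wasserstein distance. As a purely illustrative sketch (not the paper's actual definition), the 1-Wasserstein gap between two empirical 1-D feature distributions can be estimated from sorted samples of equal size, where it reduces to the mean absolute difference of order statistics:

```python
# Hypothetical sketch: estimating a classical cross-domain discrepancy
# (1-Wasserstein distance) between 1-D feature projections of two domains.
# This is only a proxy for the kind of gap the paper's transferability
# formalizes; the simulated feature values below are assumed data.
import random

random.seed(0)
n = 5000

# Simulated feature projections: the target domain is shifted by 0.5.
source = sorted(random.gauss(0.0, 1.0) for _ in range(n))
target = sorted(random.gauss(0.5, 1.0) for _ in range(n))

# For equal-size empirical distributions, W1 is the mean absolute
# difference between matched order statistics.
gap = sum(abs(s - t) for s, t in zip(source, target)) / n
print(f"estimated 1-Wasserstein gap: {gap:.3f}")
```

For two Gaussians with equal scale, the true 1-Wasserstein distance equals the mean shift (here 0.5), so the estimate should land close to that value.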
Related papers
- Distributional Shift Adaptation using Domain-Specific Features [41.91388601229745]
In open-world scenarios, streaming big data can be Out-Of-Distribution (OOD).
We propose a simple yet effective approach that relies on general correlations, regardless of whether the features are invariant.
Our approach uses the most confidently predicted samples identified by an OOD base model to train a new model that effectively adapts to the target domain.
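The confidence-based adaptation loop described above (pseudo-labeling the target samples a base model is most sure about, then fitting a new model on them) can be sketched in miniature. Everything here is illustrative: the base model is a trivial 1-D threshold classifier, the confidence function and 0.8 threshold are assumptions, and the data are synthetic.

```python
# Hypothetical sketch of confidence-based self-training: the base model's
# most confident target predictions become pseudo-labels for adapting a
# new model. All names, thresholds, and data are illustrative.
import random

random.seed(1)

# Synthetic target-domain points: class 0 clustered near -1.2, class 1 near +1.2.
target_x = [random.gauss(-1.2, 0.6) for _ in range(200)] + \
           [random.gauss(1.2, 0.6) for _ in range(200)]

def base_confidence(x):
    """Stand-in for the OOD base model's confidence: grows with |x|."""
    return min(0.99, 0.5 + abs(x) / 4)

def base_predict(x):
    """Stand-in base classifier: sign of x."""
    return int(x > 0)

# Keep only the most confidently predicted target samples (assumed threshold).
confident = [(x, base_predict(x)) for x in target_x if base_confidence(x) > 0.8]

# "Train" the adapted model: re-estimate the decision threshold as the
# midpoint of the pseudo-labeled class means.
mean0 = sum(x for x, y in confident if y == 0) / sum(1 for _, y in confident if y == 0)
mean1 = sum(x for x, y in confident if y == 1) / sum(1 for _, y in confident if y == 1)
threshold = (mean0 + mean1) / 2

print(f"{len(confident)} confident samples, adapted threshold {threshold:.2f}")
```

Because the confident subsets sit symmetrically around zero, the re-estimated threshold stays near the true decision boundary; the point of the sketch is only the select-then-retrain structure, not the toy model.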
arXiv Detail & Related papers (2022-11-09T04:16:21Z) - Domain-invariant Feature Exploration for Domain Generalization [35.99082628524934]
We argue that domain-invariant features should be originating from both internal and mutual sides.
We propose DIFEX for Domain-Invariant Feature EXploration.
Experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-07-25T09:55:55Z) - Learning Transferable Parameters for Unsupervised Domain Adaptation [29.962241958947306]
Unsupervised domain adaptation (UDA) enables a learning machine to adapt from a labeled source domain to an unlabeled target domain under distribution shift.
We propose Transferable Parameter learning (TransPar) to reduce the side effects of domain-specific information in the learning process.
arXiv Detail & Related papers (2021-08-13T09:09:15Z) - Invariant Information Bottleneck for Domain Generalization [39.62337297660974]
We propose Invariant Information Bottleneck (IIB), a novel algorithm that learns a minimally sufficient representation invariant across training and testing domains.
By minimizing the mutual information between the representation and inputs, IIB alleviates its reliance on pseudo-invariant features.
The results show that IIB outperforms invariant-learning baselines (e.g., IRM) by an average of 2.8% and 3.8% accuracy over two evaluation metrics.
arXiv Detail & Related papers (2021-06-11T12:12:40Z) - A Bit More Bayesian: Domain-Invariant Learning with Uncertainty [111.22588110362705]
Domain generalization is challenging due to the domain shift and the uncertainty caused by the inaccessibility of target domain data.
In this paper, we address both challenges with a probabilistic framework based on variational Bayesian inference.
We derive domain-invariant representations and classifiers, which are jointly established in a two-layer Bayesian neural network.
arXiv Detail & Related papers (2021-05-09T21:33:27Z) - Transferable Semantic Augmentation for Domain Adaptation [14.623272346517794]
We propose a Transferable Semantic Augmentation (TSA) approach to enhance the classifier adaptation ability.
TSA implicitly generates source features towards target semantics.
As a light-weight and general technique, TSA can be easily plugged into various domain adaptation methods.
arXiv Detail & Related papers (2021-03-23T14:04:11Z) - A Theory of Label Propagation for Subpopulation Shift [61.408438422417326]
We propose a provably effective framework for domain adaptation based on label propagation.
We obtain end-to-end finite-sample guarantees on the entire algorithm.
We extend our theoretical framework to a more general setting of source-to-target transfer based on a third unlabeled dataset.
arXiv Detail & Related papers (2021-02-22T17:27:47Z) - Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns the domain-invariant and domain-specific representations.
HDAN has exceeded the state of the art on unsupervised DA, multi-source DA, and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z) - Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z) - Gradually Vanishing Bridge for Adversarial Domain Adaptation [156.46378041408192]
We equip adversarial domain adaptation with Gradually Vanishing Bridge (GVB) mechanism on both generator and discriminator.
On the generator, GVB not only reduces the overall transfer difficulty but also reduces the influence of residual domain-specific characteristics.
On the discriminator, GVB helps enhance the discriminating ability and balance the adversarial training process.
arXiv Detail & Related papers (2020-03-30T01:36:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.