On Learning Domain-Invariant Representations for Transfer Learning with
Multiple Sources
- URL: http://arxiv.org/abs/2111.13822v1
- Date: Sat, 27 Nov 2021 06:14:28 GMT
- Title: On Learning Domain-Invariant Representations for Transfer Learning with
Multiple Sources
- Authors: Trung Phung, Trung Le, Long Vuong, Toan Tran, Anh Tran, Hung Bui, Dinh
Phung
- Abstract summary: We develop novel upper bounds for the target general loss
which motivate us to define two kinds of domain-invariant representations.
We study the pros and cons, as well as the trade-offs, of enforcing each
domain-invariant representation.
- Score: 21.06231751703114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) benefits from rigorous theoretical work
that studies its insightful characteristics and various aspects, e.g., learning
domain-invariant representations and their trade-offs. However, this is not the
case for the multiple-source DA and domain generalization (DG) settings, which
are considerably more complicated due to the involvement of multiple source
domains and the potential unavailability of the target domain during training.
In this paper, we develop novel upper bounds for the target general loss which
motivate us to define two kinds of domain-invariant representations. We further
study the pros and cons, as well as the trade-offs, of enforcing each kind of
domain-invariant representation. Finally, we conduct experiments to inspect the
trade-offs of these representations, offering practical hints on how to use
them in practice, and to explore other interesting properties of our developed
theory.
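The paper's bounds are not reproduced in this listing, but as a rough, hypothetical illustration of what "enforcing a domain-invariant representation" over multiple source domains can look like in practice, the sketch below penalizes pairwise discrepancies between per-domain feature statistics alongside the classification loss. The encoder, toy data, discrepancy measure, and trade-off weight are all assumptions for illustration, not the paper's construction.

```python
# Minimal sketch (not the paper's method): encourage a representation that
# looks similar across several source domains while staying discriminative.
import itertools
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, 4)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)
ce = nn.CrossEntropyLoss()

def mean_discrepancy(za, zb):
    # First-moment matching between two feature batches; real methods use
    # MMD, adversarial critics, or other distribution distances instead.
    return (za.mean(dim=0) - zb.mean(dim=0)).pow(2).sum()

# Toy data: three source domains with shifted inputs, shared label space.
domains = [
    (torch.randn(128, 32) + shift, torch.randint(0, 4, (128,)))
    for shift in (0.0, 1.0, -1.0)
]

for step in range(200):
    feats = [encoder(x) for x, _ in domains]
    cls_loss = sum(ce(classifier(z), y) for z, (_, y) in zip(feats, domains))
    align_loss = sum(
        mean_discrepancy(za, zb) for za, zb in itertools.combinations(feats, 2)
    )
    loss = cls_loss + 0.1 * align_loss  # trade-off weight is a free knob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The 0.1 weight makes the invariance trade-off discussed in the abstract explicit: pushing it higher aligns the domains more strongly, at the possible cost of classification-relevant information.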
Related papers
- Aggregation of Disentanglement: Reconsidering Domain Variations in
Domain Generalization [9.577254317971933]
We argue that domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propose a new contrastive learning method to guide the domain expert features toward a more balanced and separable feature space.
arXiv Detail & Related papers (2023-02-05T09:48:57Z)
- Learning to Learn Domain-invariant Parameters for Domain Generalization [29.821634033299855]
Domain generalization (DG) aims to overcome this issue by capturing domain-invariant representations from source domains.
We propose two modules of Domain Decoupling and Combination (DDC) and Domain-invariance-guided Backpropagation (DIGB)
Our proposed method has achieved state-of-the-art performance with strong generalization capability.
arXiv Detail & Related papers (2022-11-04T07:19:34Z)
- Domain-invariant Feature Exploration for Domain Generalization [35.99082628524934]
We argue that domain-invariant features should originate from both internal and mutual sides.
We propose DIFEX for Domain-Invariant Feature EXploration.
Experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-07-25T09:55:55Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness prototype representations, i.e., the class centroids, to perform relational modeling in the embedding space (a minimal sketch of this centroid idea appears after this list).
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds state-of-the-art performance without the need for domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
- Unsupervised Domain Generalization by Learning a Bridge Across Domains [78.855606355957]
The Unsupervised Domain Generalization (UDG) setup has no training supervision in either the source or the target domains.
Our approach is based on self-supervised learning of a Bridge Across Domains (BrAD) - an auxiliary bridge domain accompanied by a set of semantics-preserving visual (image-to-image) mappings to BrAD from each of the training domains.
We show how using an edge-regularized BrAD our approach achieves significant gains across multiple benchmarks and a range of tasks, including UDG, Few-shot UDA, and unsupervised generalization across multi-domain datasets.
arXiv Detail & Related papers (2021-12-04T10:25:45Z)
- TAL: Two-stream Adaptive Learning for Generalizable Person Re-identification [115.31432027711202]
We argue that both domain-specific and domain-invariant features are crucial for improving the generalization ability of re-id models.
We propose two-stream adaptive learning (TAL) to simultaneously model these two kinds of information.
Our framework can be applied to both single-source and multi-source domain generalization tasks.
arXiv Detail & Related papers (2021-11-29T01:27:42Z)
- Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns the domain-invariant and domain-specific representations.
HDAN has exceeded the state of the art on unsupervised DA, multi-source DA, and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A²KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Learning to Learn with Variational Information Bottleneck for Domain Generalization [128.90691697063616]
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift.
We introduce a probabilistic meta-learning model for domain generalization, in which parameters shared across domains are modeled as distributions.
To deal with domain shift, we learn domain-invariant representations via the proposed principle of meta variational information bottleneck, which we call MetaVIB.
arXiv Detail & Related papers (2020-07-15T12:05:52Z)
- Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations [14.751829773340537]
We examine adversarial censoring techniques for learning invariant representations from multiple "studies" (or domains).
In many contexts, such as medical forecasting, domain generalization from studies in populous areas provides a different flavor of fairness, one not anticipated in previous work on algorithmic fairness.
arXiv Detail & Related papers (2020-06-20T02:35:03Z)
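As a side note on the Compound Domain Generalization (COMEN) entry above: "prototype representations" there refers to class centroids in the embedding space. The following is a minimal, hypothetical sketch of that ingredient, not COMEN's actual implementation.

```python
# Hypothetical illustration of class centroids ("prototypes") in an
# embedding space; not the implementation from the COMEN paper.
import torch

def class_centroids(embeddings: torch.Tensor, labels: torch.Tensor,
                    num_classes: int) -> torch.Tensor:
    # Average the embeddings of each class to get one prototype per class.
    protos = torch.zeros(num_classes, embeddings.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return protos

# Toy usage: classify queries by their nearest prototype.
emb = torch.randn(100, 16)
lab = torch.randint(0, 4, (100,))
protos = class_centroids(emb, lab, num_classes=4)
query = torch.randn(5, 16)
pred = torch.cdist(query, protos).argmin(dim=1)  # nearest-centroid labels
```

Nearest-centroid classification is only one use of such prototypes; relational modeling, as in the entry above, compares samples to these centroids rather than to each other.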
This list is automatically generated from the titles and abstracts of the papers on this site.