Gradual Domain Adaptation via Normalizing Flows
- URL: http://arxiv.org/abs/2206.11492v4
- Date: Tue, 23 Jan 2024 05:52:26 GMT
- Title: Gradual Domain Adaptation via Normalizing Flows
- Authors: Shogo Sagawa, Hideitsu Hino
- Abstract summary: Standard domain adaptation methods fail when a large gap exists
between the source and target domains. Gradual domain adaptation is one approach
to this problem. We propose the use of normalizing flows to address it.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standard domain adaptation methods do not work well when a large gap exists
between the source and target domains. Gradual domain adaptation is one of the
approaches used to address the problem. It involves leveraging the intermediate
domain, which gradually shifts from the source domain to the target domain. In
previous work, it is assumed that the number of intermediate domains is large
and the distance between adjacent domains is small; hence, the gradual domain
adaptation algorithm, involving self-training with unlabeled datasets, is
applicable. In practice, however, gradual self-training will fail because the
number of intermediate domains is limited and the distance between adjacent
domains is large. We propose the use of normalizing flows to deal with this
problem while maintaining the framework of unsupervised domain adaptation. The
proposed method learns a transformation from the distribution of the target
domain to the Gaussian mixture distribution via the source domain. We evaluate
our proposed method in experiments on real-world datasets and confirm that it
mitigates this problem and improves classification performance.
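The core idea, per the abstract, is to fit an invertible map under which the data follow a Gaussian mixture. Below is a minimal sketch of that likelihood objective, not the authors' implementation: a RealNVP-style coupling flow trained by maximum likelihood against an equal-weight Gaussian-mixture base density. The architecture, mixture parameters, and synthetic batch are all illustrative assumptions.

```python
# Minimal sketch, NOT the authors' code: a RealNVP-style affine coupling flow
# trained by maximum likelihood so that inputs are mapped to a Gaussian
# mixture base distribution. Architecture, mixture parameters, and data are
# illustrative assumptions; assumes an even feature dimension.
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Transforms half of the features conditioned on the other half."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        # Predicts a scale and a shift for the second half from the first half.
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                       # keep scales well-behaved
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], dim=-1), s.sum(dim=-1)  # output, log|det J|

class Flow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([AffineCoupling(dim) for _ in range(n_layers)])

    def forward(self, x):
        log_det = x.new_zeros(x.shape[0])
        for layer in self.layers:
            x, ld = layer(x)
            x = x.flip(-1)                      # alternate which half is transformed
            log_det = log_det + ld
        return x, log_det

def gmm_log_prob(z, means, log_std):
    """Equal-weight Gaussian mixture; one mode per class is an assumption."""
    diffs = (z.unsqueeze(1) - means) / log_std.exp()      # (batch, K, dim)
    comp = (-0.5 * (diffs ** 2).sum(-1) - log_std.sum()
            - 0.5 * z.shape[-1] * math.log(2 * math.pi))  # (batch, K)
    return torch.logsumexp(comp, dim=1) - math.log(means.shape[0])

dim, n_components = 4, 2                        # placeholder sizes
flow = Flow(dim)
means = torch.randn(n_components, dim) * 3.0    # fixed mixture centers (assumption)
log_std = torch.zeros(dim)
opt = torch.optim.Adam(flow.parameters(), lr=1e-3)

x = torch.randn(256, dim) + 1.0                 # stand-in for unlabeled domain data
for step in range(200):
    z, log_det = flow(x)                        # map data toward the mixture
    loss = -(gmm_log_prob(z, means, log_std) + log_det).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

In the paper's setting the flow would be fit across the source and intermediate domains so that target samples are pulled back to the mixture via the source domain; the single-batch fit above only illustrates the change-of-variables training objective.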
Related papers
- Gradual Domain Adaptation: Theory and Algorithms [15.278170387810409]
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way.
In this work, we first theoretically analyze gradual self-training, a popular GDA algorithm, and provide a significantly improved generalization bound.
We propose Generative Gradual Domain Adaptation with Optimal Transport (GOAT).
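The gradual self-training baseline analyzed in this entry can be sketched roughly as follows; the scikit-learn classifier and the unfiltered pseudo-labeling are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of gradual self-training: fit on the labeled source domain,
# then re-fit on each unlabeled intermediate domain using the previous
# model's pseudo-labels. Classifier choice is an illustrative assumption.
from sklearn.linear_model import LogisticRegression

def gradual_self_train(X_source, y_source, intermediate_domains):
    """intermediate_domains: unlabeled feature arrays ordered source -> target."""
    model = LogisticRegression(max_iter=1000).fit(X_source, y_source)
    for X_domain in intermediate_domains:
        pseudo_labels = model.predict(X_domain)   # label the next domain
        model = LogisticRegression(max_iter=1000).fit(X_domain, pseudo_labels)
    return model                                  # adapted toward the target domain
```

As the main abstract above notes, this procedure breaks down when the intermediate domains are few and far apart, which is what motivates the flow-based alternative.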
arXiv Detail & Related papers (2023-10-20T23:02:08Z) - Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z) - Domain Generalization via Selective Consistency Regularization for Time
Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tackles the domain adaptation problem without using any source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Cost-effective Framework for Gradual Domain Adaptation with
Multifidelity [3.6042575355093907]
In domain adaptation, prediction performance degrades when there is a large distance between the source and target domains.
We propose a framework that combines multifidelity and active domain adaptation.
The effectiveness of the proposed method is evaluated by experiments with both artificial and real-world datasets.
arXiv Detail & Related papers (2022-02-09T09:44:39Z) - IDM: An Intermediate Domain Module for Domain Adaptive Person Re-ID [58.46907388691056]
We argue that bridging the source and target domains can be exploited to tackle the UDA re-ID task.
We propose an Intermediate Domain Module (IDM) to generate intermediate domains' representations on-the-fly.
Our proposed method outperforms the state of the art by a large margin on all common UDA re-ID tasks.
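The summary leaves the module's internals open; one plausible reading, sketched below purely as an illustration, is a small network that blends source and target hidden features with a learned convex weight to synthesize intermediate-domain representations on the fly.

```python
# Illustrative sketch only; the actual IDM design may differ. The idea: blend
# source and target hidden features with a learned convex weight so the
# network sees "intermediate domain" representations during training.
import torch
import torch.nn as nn

class IntermediateDomainMixer(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        # Predicts a per-sample mixing weight in (0, 1) from both features.
        self.weigher = nn.Sequential(nn.Linear(2 * feat_dim, 1), nn.Sigmoid())

    def forward(self, h_source, h_target):
        lam = self.weigher(torch.cat([h_source, h_target], dim=-1))  # (batch, 1)
        return lam * h_source + (1.0 - lam) * h_target  # convex combination
```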
arXiv Detail & Related papers (2021-08-05T07:19:46Z) - Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain) that learns models for intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks based on deep convolutional neural networks (CNNs) across the source and target domains do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art in various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation practice extracts effective features by incorporating pseudo-labels for the target domain.
It is essential to align the target data with only a small subset of the source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Unsupervised Cross-domain Image Classification by Distance Metric Guided
Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue that transfers knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z) - Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain
Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and jointly learns to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.