Cost-effective Framework for Gradual Domain Adaptation with
Multifidelity
- URL: http://arxiv.org/abs/2202.04359v1
- Date: Wed, 9 Feb 2022 09:44:39 GMT
- Title: Cost-effective Framework for Gradual Domain Adaptation with
Multifidelity
- Authors: Shogo Sagawa and Hideitsu Hino
- Abstract summary: In domain adaptation, prediction performance degrades when there is a large distance between the source and target domains.
We propose a framework that combines multifidelity and active domain adaptation.
The effectiveness of the proposed method is evaluated by experiments with both artificial and real-world datasets.
- Score: 3.6042575355093907
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In domain adaptation, prediction performance degrades when there is
a large distance between the source and target domains. Gradual domain
adaptation is one solution to this issue: it assumes access to intermediate
domains that shift gradually from the source to the target domain. Previous
works assumed that the number of samples in the intermediate domains is
sufficiently large, so that self-training is possible without labeled data; if
access to an intermediate domain is restricted, self-training fails. In
practice, the cost of sampling from an intermediate domain varies, and it is
natural to assume that the closer an intermediate domain is to the target
domain, the more expensive its samples are to obtain. To resolve this trade-off
between cost and accuracy, we propose a framework that combines multifidelity
and active domain adaptation. The effectiveness of the proposed method is
evaluated in experiments with both artificial and real-world datasets. Code is
available at https://github.com/ssgw320/gdamf.
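To make the cost-accuracy trade-off concrete, here is a minimal sketch of gradual self-training with a naive cost-aware active-query step. It is an illustration only, not the GDAMF algorithm from the repository above; the function names, the even per-domain budget split, and the `oracle` callable are assumptions introduced for this sketch.

```python
# Minimal sketch of gradual self-training with cost-aware active queries.
# This is NOT the authors' GDAMF algorithm; the names and the even
# budget-splitting rule are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gradual_self_train(X_src, y_src, domains, costs, budget, oracle):
    """Adapt a classifier along a sequence of intermediate domains.

    domains -- list of unlabeled sample arrays, ordered source -> target
    costs   -- per-sample labeling cost of each domain (rising toward target)
    budget  -- total labeling budget spent on active queries
    oracle  -- hypothetical callable (domain_index, indices) -> true labels
    """
    model = LogisticRegression(max_iter=1000).fit(X_src, y_src)
    per_domain_budget = budget / len(domains)  # naive even split
    for t, (X, cost) in enumerate(zip(domains, costs)):
        y_hat = model.predict(X)  # self-training: pseudo-label everything
        # Active step: spend this domain's share of the budget on the least
        # confident samples; costlier domains therefore get fewer true labels.
        n_query = min(int(per_domain_budget // cost), len(X))
        confidence = model.predict_proba(X).max(axis=1)
        queried = np.argsort(confidence)[:n_query]
        y_hat[queried] = oracle(t, queried)  # replace pseudo-labels
        model = LogisticRegression(max_iter=1000).fit(X, y_hat)
    return model
```

With budget = 0 the loop reduces to plain gradual self-training, which is exactly the setting that fails when intermediate samples are scarce; the query step is where a cost model like the one described in the abstract enters.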
Related papers
- Gradual Domain Adaptation: Theory and Algorithms [15.278170387810409]
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way.
In this work, we first theoretically analyze gradual self-training, a popular GDA algorithm, and provide a significantly improved generalization bound.
We propose Generative Gradual DOmain Adaptation with Optimal Transport (GOAT).
arXiv Detail & Related papers (2023-10-20T23:02:08Z)
- Gradual Domain Adaptation via Normalizing Flows [2.7467053150385956]
A large gap between the source and target domains makes adaptation difficult.
Gradual domain adaptation is one approach to addressing this problem.
We propose the use of normalizing flows to deal with this problem.
arXiv Detail & Related papers (2022-06-23T06:24:50Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Gradual Domain Adaptation via Self-Training of Auxiliary Models [50.63206102072175]
Domain adaptation becomes more challenging with increasing gaps between source and target domains.
We propose self-training of auxiliary models (AuxSelfTrain), which learns models for the intermediate domains.
Experiments on benchmark datasets of unsupervised and semi-supervised domain adaptation verify its efficacy.
arXiv Detail & Related papers (2021-06-18T03:15:25Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Dynamic Transfer for Multi-Source Domain Adaptation [82.54405157719641]
We present dynamic transfer to address domain conflicts, in which the model parameters are adapted to individual samples.
It breaks down source domain barriers and turns multi-source domains into a single-source domain.
Experimental results show that, without using domain labels, our dynamic transfer outperforms the state-of-the-art method by more than 3%.
arXiv Detail & Related papers (2021-03-19T01:22:12Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent work on domain adaptation extracts effective features by incorporating pseudo-labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have far fewer annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.