Rethinking Distributional Matching Based Domain Adaptation
- URL: http://arxiv.org/abs/2006.13352v2
- Date: Fri, 3 Jul 2020 07:00:54 GMT
- Title: Rethinking Distributional Matching Based Domain Adaptation
- Authors: Bo Li, Yezhen Wang, Tong Che, Shanghang Zhang, Sicheng Zhao, Pengfei
Xu, Wei Zhou, Yoshua Bengio, Kurt Keutzer
- Abstract summary: Domain adaptation (DA) is a technique that transfers predictive models trained on a labeled source domain to an unlabeled target domain.
Most popular DA algorithms are based on distributional matching (DM).
In this paper, we first systematically analyze the limitations of DM based methods, and then build new benchmarks with more realistic domain shifts.
- Score: 111.15106414932413
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation (DA) is a technique that transfers predictive models
trained on a labeled source domain to an unlabeled target domain, with the core
difficulty of resolving distributional shift between domains. Currently, most
popular DA algorithms are based on distributional matching (DM). However, in
practice, realistic domain shifts (RDS) may violate their basic assumptions, and
as a result these methods fail. In this paper, in order to devise robust
DA algorithms, we first systematically analyze the limitations of DM based
methods, and then build new benchmarks with more realistic domain shifts to
evaluate the well-accepted DM methods. We further propose InstaPBM, a novel
Instance-based Predictive Behavior Matching method for robust DA. Extensive
experiments on both conventional and RDS benchmarks demonstrate both the
limitations of DM methods and the efficacy of InstaPBM: compared with the best
baselines, InstaPBM improves classification accuracy by $4.5\%$ and $3.9\%$ on
Digits5 and VisDA2017, and by $2.2\%$, $2.9\%$, and $3.6\%$ on DomainNet-LDS,
DomainNet-ILDS, and ID-TwO, respectively. We hope our intuitive yet effective
method will serve as a useful new direction and increase the robustness of DA
in real scenarios. Code will be available at:
https://github.com/pikachusocute/InstaPBM-RobustDA.
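Distributional matching methods typically minimize a statistical distance between source and target feature distributions. As a minimal sketch (not the paper's InstaPBM; function names and the toy inputs are illustrative only), here is the classical RBF-kernel Maximum Mean Discrepancy that many DM-based DA objectives use as an alignment loss:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) on feature vectors."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def mmd2(source_feats, target_feats, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy between two
    lists of feature vectors -- a distance that DM-based methods drive
    toward zero to align source and target feature distributions."""
    def mean_kernel(xs, ys):
        total = sum(rbf_kernel(x, y, gamma) for x in xs for y in ys)
        return total / (len(xs) * len(ys))
    return (mean_kernel(source_feats, source_feats)
            + mean_kernel(target_feats, target_feats)
            - 2.0 * mean_kernel(source_feats, target_feats))

# Identical feature sets -> zero discrepancy; shifted sets -> positive.
aligned = mmd2([[0.0], [1.0]], [[0.0], [1.0]])
shifted = mmd2([[0.0], [1.0]], [[5.0], [6.0]])
```

In a DM pipeline such a term is added to the source classification loss; the paper's argument is precisely that forcing this kind of marginal alignment to zero can hurt under realistic domain shifts.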
Related papers
- Gradual Domain Adaptation: Theory and Algorithms [15.278170387810409]
Unsupervised domain adaptation (UDA) adapts a model from a labeled source domain to an unlabeled target domain in a one-off way.
In this work, we first theoretically analyze gradual self-training, a popular GDA algorithm, and provide a significantly improved generalization bound.
We propose Generative Gradual DOmain Adaptation with Optimal Transport (GOAT).
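The gradual self-training idea analyzed in this paper can be sketched in a few lines. The toy model below (a 1-D threshold classifier; all names and numbers are illustrative, not from the paper) shows how pseudo-labeling through a sequence of intermediate domains tracks a slowly drifting class boundary:

```python
def gradual_self_train(threshold, intermediate_domains):
    """Gradual self-training for a toy 1-D classifier sign(x - threshold).

    At each intermediate (unlabeled) domain, pseudo-label the points with
    the current threshold, then refit the threshold as the midpoint of the
    two pseudo-classes before moving on to the next domain."""
    for xs in intermediate_domains:
        positives = [x for x in xs if x > threshold]
        negatives = [x for x in xs if x <= threshold]
        if positives and negatives:
            threshold = (min(positives) + max(negatives)) / 2.0
    return threshold

# The two classes start near 0 and 1 and drift upward by 0.2 per step;
# stepping through the intermediate domains tracks the drift, whereas a
# one-off adaptation with the initial threshold 0.5 would misclassify
# the final domain's lower class.
final_threshold = gradual_self_train(0.5, [[0.2, 1.2], [0.4, 1.4], [0.6, 1.6]])
```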
arXiv Detail & Related papers (2023-10-20T23:02:08Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Towards Corruption-Agnostic Robust Domain Adaptation [76.66523954277945]
We investigate a new task, Corruption-agnostic Robust Domain Adaptation (CRDA): to be accurate on original data and robust against unavailable-for-training corruptions on target domains.
We propose a new approach based on two technical insights into CRDA: 1) an easy-to-plug module called Domain Discrepancy Generator (DDG), which generates samples that enlarge the domain discrepancy so as to mimic unpredictable corruptions; 2) a simple but effective teacher-student scheme with a contrastive loss to strengthen the constraints on target domains.
arXiv Detail & Related papers (2021-04-21T06:27:48Z)
- ConDA: Continual Unsupervised Domain Adaptation [0.0]
Domain Adaptation (DA) techniques are important for overcoming the domain shift between the source domain used for training and the target domain where testing takes place.
Current DA methods assume that the entire target domain is available during adaptation, which may not hold in practice.
This paper considers a more realistic scenario, where target data become available in smaller batches and adaptation on the entire target domain is not feasible.
arXiv Detail & Related papers (2021-03-19T23:20:41Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method that tackles this problem via inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
- FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation [26.929772844572213]
We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain.
We train the source-dominant model and the target-dominant model that have complementary characteristics.
Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain.
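The fixed-ratio mixup idea can be illustrated with a small sketch (function names and the 0.7/0.3 ratios are hypothetical, not taken from the paper): mixing each source/target pair with a constant ratio yields a point on an intermediate domain, and two complementary ratios supply the source-dominant and target-dominant models with their inputs:

```python
def fixed_ratio_mixup(x_source, x_target, ratio):
    """Mix one source and one target sample with a fixed ratio, producing
    a point on an intermediate domain between the two. A ratio near 1 is
    source-dominant; a ratio near 0 is target-dominant."""
    return [ratio * s + (1.0 - ratio) * t for s, t in zip(x_source, x_target)]

# Two complementary intermediate domains, one per model (illustrative split).
source_dominant = fixed_ratio_mixup([1.0, 0.0], [0.0, 1.0], ratio=0.7)
target_dominant = fixed_ratio_mixup([1.0, 0.0], [0.0, 1.0], ratio=0.3)
```

Keeping the ratio fixed (rather than sampling it, as ordinary mixup does) pins each model to a consistent intermediate domain, which is what lets the pair bridge the gap step by step.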
arXiv Detail & Related papers (2020-11-18T11:58:19Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
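For intuition, a squeeze-and-excitation-style channel gate (a generic sketch with hypothetical names, not DCAN's exact architecture) scales each convolutional channel by a learned value in (0, 1); the domain-conditioned variant the abstract describes would keep separate gate parameters per domain so that source and target can excite different channels:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def channel_gate(channel_means, gate_weights, gate_bias):
    """Channel attention gate: globally pooled per-channel statistics pass
    through a small linear layer and a sigmoid, giving each channel a
    scale in (0, 1)."""
    return [sigmoid(sum(w * m for w, m in zip(row, channel_means)) + b)
            for row, b in zip(gate_weights, gate_bias)]

def apply_gate(feature_map, gates):
    """Rescale each channel of a (channels x positions) feature map."""
    return [[g * v for v in channel] for g, channel in zip(gates, feature_map)]
```

Conditioning the gate parameters on the domain label is the "domain conditioned" part: the backbone is shared while the channel activations are allowed to differ across domains.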
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Maximum Density Divergence for Domain Adaptation [0.0]
Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain.
We propose a new domain adaptation method named Adversarial Tight Match (ATM) which enjoys the benefits of both adversarial training and metric learning.
arXiv Detail & Related papers (2020-04-27T07:35:06Z)
- Online Meta-Learning for Multi-Source and Semi-Supervised Domain Adaptation [4.1799778475823315]
We propose a framework to enhance performance by meta-learning the initial conditions of existing DA algorithms.
We present variants for both multi-source unsupervised domain adaptation (MSDA) and semi-supervised domain adaptation (SSDA).
We achieve state-of-the-art results on several DA benchmarks, including the largest-scale one, DomainNet.
arXiv Detail & Related papers (2020-04-09T07:48:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.