Maximum Density Divergence for Domain Adaptation
- URL: http://arxiv.org/abs/2004.12615v1
- Date: Mon, 27 Apr 2020 07:35:06 GMT
- Title: Maximum Density Divergence for Domain Adaptation
- Authors: Jingjing Li, Erpeng Chen, Zhengming Ding, Lei Zhu, Ke Lu, Heng Tao Shen
- Abstract summary: Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labeled source domain to an unlabeled target domain.
We propose a new domain adaptation method named Adversarial Tight Match (ATM) which enjoys the benefits of both adversarial training and metric learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation addresses the problem of transferring
knowledge from a well-labeled source domain to an unlabeled target domain where
the two domains have distinctive data distributions. Thus, the essence of
domain adaptation is to mitigate the distribution divergence between the two
domains. State-of-the-art methods pursue this idea by either conducting adversarial training or minimizing a metric that quantifies the distribution gap. In this paper, we propose a new domain adaptation method
named Adversarial Tight Match (ATM) which enjoys the benefits of both
adversarial training and metric learning. Specifically, at first, we propose a
novel distance loss, named Maximum Density Divergence (MDD), to quantify the
distribution divergence. MDD minimizes the inter-domain divergence ("match" in
ATM) and maximizes the intra-class density ("tight" in ATM). Then, to address the equilibrium challenge in adversarial domain adaptation, we incorporate the proposed MDD into an adversarial domain adaptation framework. Finally, we tailor the proposed MDD into a practical learning loss and report our ATM. Both empirical evaluation and theoretical analysis are reported to verify
the effectiveness of the proposed method. The experimental results on four
benchmarks, both classical and large-scale, show that our method is able to
achieve new state-of-the-art performance on most evaluations. Code and datasets used in this paper are available at github.com/lijin118/ATM.
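To make the "match" and "tight" terms concrete, here is a minimal sketch of a loss in that spirit. It is not the authors' exact MDD formulation (see github.com/lijin118/ATM for the reference implementation); the equal-size batch pairing, the centroid-based density term, and the weighting below are illustrative assumptions.

```python
# Hedged sketch of an MDD-style objective: an inter-domain "match" term plus
# an intra-class "tight" term. NOT the authors' exact loss; pairing equal-size
# batches and using class centroids for density are assumptions made here.
import torch

def match_tight_loss(src_feat, tgt_feat, src_labels, tgt_pseudo_labels,
                     tight_weight=1.0):
    """src_feat, tgt_feat: (batch, dim) features from equally sized batches.
    Target labels are pseudo-labels, as is usual in unsupervised DA."""
    # "Match": pull paired source/target features together.
    match = (src_feat - tgt_feat).pow(2).sum(dim=1).mean()

    # "Tight": raise intra-class density by pulling every feature toward the
    # centroid of its class, computed over the joint batch.
    feats = torch.cat([src_feat, tgt_feat], dim=0)
    labels = torch.cat([src_labels, tgt_pseudo_labels], dim=0)
    tight = feats.new_zeros(())
    for c in labels.unique():
        members = feats[labels == c]
        centroid = members.mean(dim=0, keepdim=True)
        tight = tight + (members - centroid).pow(2).sum(dim=1).mean()
    return match + tight_weight * tight
```

In the paper, a loss of this kind is minimized jointly with an adversarial domain discriminator, which is how ATM combines metric learning with adversarial training and, per the abstract, eases the equilibrium challenge.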
Related papers
- Domain Adaptation via Rebalanced Sub-domain Alignment [22.68115322836635]
Unsupervised domain adaptation (UDA) is a technique used to transfer knowledge from a labeled source domain to a related unlabeled target domain.
Many UDA methods have shown success in the past, but they often assume that the source and target domains have identical class label distributions.
We propose a novel generalization bound that reweights source classification error by aligning source and target sub-domains.
arXiv Detail & Related papers (2023-02-03T21:30:40Z)
- Unsupervised Domain Adaptation Based on the Predictive Uncertainty of Models [1.6498361958317636]
Unsupervised domain adaptation (UDA) aims to improve the prediction performance in the target domain under distribution shifts from the source domain.
We present a novel UDA method that learns domain-invariant features by minimizing the domain divergence.
arXiv Detail & Related papers (2022-11-16T12:23:32Z)
- Constrained Maximum Cross-Domain Likelihood for Domain Generalization [14.91361835243516]
Domain generalization aims to learn a generalizable model on multiple source domains, which is expected to perform well on unseen test domains.
In this paper, we propose a novel domain generalization method, which minimizes the KL-divergence between posterior distributions from different domains.
Experiments on four standard benchmark datasets, i.e., Digits-DG, PACS, Office-Home and miniDomainNet, highlight the superior performance of our method.
arXiv Detail & Related papers (2022-10-09T03:41:02Z)
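As a rough illustration of the KL-based alignment the entry above describes (a sketch only; the tensor names are hypothetical and the paper's constrained objective is derived differently):

```python
# Sketch: align class posteriors across two domains by minimizing the
# KL-divergence between their batch-averaged predictive distributions.
# Illustrative only; the paper constrains and derives its objective
# differently.
import torch
import torch.nn.functional as F

def posterior_kl(src_logits, tgt_logits, eps=1e-8):
    """src_logits, tgt_logits: (batch, num_classes) classifier outputs."""
    src_post = F.softmax(src_logits, dim=1).mean(dim=0)  # per-domain posterior
    tgt_post = F.softmax(tgt_logits, dim=1).mean(dim=0)
    # KL(src || tgt) over the aggregated class distributions.
    return torch.sum(src_post * (src_post.clamp_min(eps).log()
                                 - tgt_post.clamp_min(eps).log()))
```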
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target; an alternative is to minimize the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Semi-Supervised Adversarial Discriminative Domain Adaptation [18.15464889789663]
Domain adaptation is a promising approach to training a powerful deep neural network when labeled data are absent.
In this paper, we propose an improved adversarial domain adaptation method called Semi-Supervised Adversarial Discriminative Domain Adaptation (SADDA).
arXiv Detail & Related papers (2021-09-27T12:52:50Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
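A compact sketch of the category-wise centroid alignment idea from the entry above; the contrastive loss and the temporal-ensemble pseudo-labeling are omitted, and the target labels here stand in for pseudo-labels:

```python
# Sketch: one centroid per class and domain; penalize the cross-domain
# centroid distance per class. Omits the paper's contrastive term and
# temporal ensembling; tgt_labels would be pseudo-labels in practice.
import torch

def centroid_alignment(src_feat, src_labels, tgt_feat, tgt_labels, num_classes):
    loss = src_feat.new_zeros(())
    matched = 0
    for c in range(num_classes):
        src_c = src_feat[src_labels == c]
        tgt_c = tgt_feat[tgt_labels == c]
        if len(src_c) == 0 or len(tgt_c) == 0:
            continue  # class missing from one batch; skip it
        loss = loss + (src_c.mean(dim=0) - tgt_c.mean(dim=0)).pow(2).sum()
        matched += 1
    return loss / max(matched, 1)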
- Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves two essential facts, the first being that minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances respectively while jointly minimizing their variance with some implicit weights, so that feature discriminability degrades.
Experiments on several benchmark datasets not only validate the theoretical results but also demonstrate that our approach substantially outperforms comparable state-of-the-art methods.
arXiv Detail & Related papers (2020-07-01T18:25:10Z)
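For reference, the statistic the entry above analyzes, a standard (biased) RBF-kernel MMD² estimate, can be computed as below; the single fixed bandwidth is a simplification:

```python
# Standard biased MMD^2 estimate with a single RBF kernel. Multi-kernel
# bandwidth schemes are common in practice; a fixed sigma is a simplification.
import torch

def mmd2_rbf(x, y, sigma=1.0):
    """x: (n, d) source features; y: (m, d) target features."""
    def rbf(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))  # RBF kernel matrix
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()
```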
- Rethinking Distributional Matching Based Domain Adaptation [111.15106414932413]
Domain adaptation (DA) is a technique that transfers predictive models trained on a labeled source domain to an unlabeled target domain.
Most popular DA algorithms are based on distributional matching (DM).
In this paper, we first systematically analyze the limitations of DM based methods, and then build new benchmarks with more realistic domain shifts.
arXiv Detail & Related papers (2020-06-23T21:55:14Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$^3$US surpasses state-of-the-art methods for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)