Learning to Match Distributions for Domain Adaptation
- URL: http://arxiv.org/abs/2007.10791v3
- Date: Mon, 27 Jul 2020 01:44:38 GMT
- Title: Learning to Match Distributions for Domain Adaptation
- Authors: Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng,
Yiqiang Chen, Tie-Yan Liu
- Abstract summary: This paper proposes Learning to Match (L2M) to automatically learn the cross-domain distribution matching.
L2M reduces the inductive bias by using a meta-network to learn the distribution matching loss in a data-driven way.
Experiments on public datasets substantiate the superiority of L2M over SOTA methods.
- Score: 116.14838935146004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When the training and test data are from different distributions, domain
adaptation is needed to reduce dataset bias to improve the model's
generalization ability. Since it is difficult to directly match the
cross-domain joint distributions, existing methods tend to reduce the marginal
or conditional distribution divergence using predefined distances such as MMD
and adversarial-based discrepancies. However, it remains challenging to
determine which method is suitable for a given application, since each is built
with certain priors or biases; thus, they may fail to uncover the underlying
relationship between transferable features and joint distributions. This paper
proposes Learning to Match (L2M) to automatically learn the cross-domain
distribution matching without relying on hand-crafted priors on the matching
loss. Instead, L2M reduces the inductive bias by using a meta-network to learn
the distribution matching loss in a data-driven way. L2M is a general framework
that unifies task-independent and human-designed matching features. We design a
novel optimization algorithm for this challenging objective with
self-supervised label propagation. Experiments on public datasets substantiate
the superiority of L2M over SOTA methods. Moreover, we apply L2M to transfer
from pneumonia to COVID-19 chest X-ray images with remarkable performance. L2M
can also be extended to other distribution matching applications; in a trial
experiment, it generates more realistic and sharper MNIST samples.
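As a concrete reference point for the predefined distances the abstract mentions, the following is a minimal NumPy sketch of an RBF-kernel Maximum Mean Discrepancy (MMD) estimate between source and target features. The feature shapes, bandwidth, and toy data are illustrative assumptions, not details from the paper.

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel matrix: k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2)).
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(source, target, bandwidth=1.0):
    # Biased estimate of the squared MMD between two feature samples.
    k_ss = rbf_kernel(source, source, bandwidth).mean()
    k_tt = rbf_kernel(target, target, bandwidth).mean()
    k_st = rbf_kernel(source, target, bandwidth).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(64, 8))   # source-domain features
tgt = rng.normal(0.5, 1.0, size=(64, 8))   # shifted target-domain features
print(mmd2(src, src) <= mmd2(src, tgt))    # the shift increases the discrepancy
```

In adversarial or L2M-style methods such a hand-crafted distance would be replaced by a learned matching loss; this sketch only illustrates the fixed-prior baseline the abstract argues against.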
Related papers
- Improving Distribution Alignment with Diversity-based Sampling [0.0]
Domain shifts are ubiquitous in machine learning, and can substantially degrade a model's performance when deployed to real-world data.
This paper proposes to improve these estimates by inducing diversity in each sampled minibatch.
It simultaneously balances the data and reduces the variance of the gradients, thereby enhancing the model's generalisation ability.
arXiv Detail & Related papers (2024-10-05T17:26:03Z)
- Dataset Condensation with Latent Quantile Matching [5.466962214217334]
Current distribution matching (DM) based DC methods learn a synthesized dataset by matching the mean of the latent embeddings between the synthetic and the real data.
We propose Latent Quantile Matching (LQM), which matches the quantiles of the latent embeddings to minimize the goodness-of-fit test statistic between two distributions.
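The quantile-matching idea can be sketched as follows; this is an illustrative NumPy approximation, not the authors' implementation, and the quantile grid and toy data are assumptions.

```python
import numpy as np

def quantile_matching_loss(synthetic, real, num_quantiles=9):
    # Match per-dimension quantiles of the latent embeddings instead of only
    # their means; the loss here is the mean squared quantile gap.
    qs = np.linspace(0.1, 0.9, num_quantiles)
    q_syn = np.quantile(synthetic, qs, axis=0)
    q_real = np.quantile(real, qs, axis=0)
    return ((q_syn - q_real) ** 2).mean()

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(256, 4))
close = rng.normal(0.0, 1.0, size=(256, 4))     # same distribution as real
skewed = rng.exponential(1.0, size=(256, 4))    # different shape entirely
print(quantile_matching_loss(close, real) < quantile_matching_loss(skewed, real))
```

Unlike mean matching, the quantile gap also penalizes distributions that agree in mean but differ in shape or tails.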
arXiv Detail & Related papers (2024-06-14T09:20:44Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and each group is treated with tailored learning goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose effective gap estimation methods for guiding the selection of a better hypothesis for the target.
Another proposed method minimizes the gap directly by adapting model parameters using online target samples.
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
- Tackling Long-Tailed Category Distribution Under Domain Shifts [50.21255304847395]
Existing approaches cannot handle the scenario where both long-tailed class distribution and domain shift exist.
We designed three novel core functional blocks including Distribution Calibrated Classification Loss, Visual-Semantic Mapping and Semantic-Similarity Guided Augmentation.
Two new datasets were proposed for this problem, named AWA2-LTS and ImageNet-LTS.
arXiv Detail & Related papers (2022-07-20T19:07:46Z)
- KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distributions over the source and the target domain.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples.
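A minibatch KL estimate of the kind described above can be sketched as follows, assuming a probabilistic representation with a tractable diagonal-Gaussian density so that per-sample log-densities are available; all shapes and parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_logpdf(x, mean, std):
    # Log-density of a diagonal Gaussian, summed over feature dimensions.
    return (-0.5 * ((x - mean) / std) ** 2
            - np.log(std) - 0.5 * np.log(2.0 * np.pi)).sum(-1)

def kl_minibatch_estimate(samples, p_mean, p_std, q_mean, q_std):
    # Monte Carlo estimate of KL(p || q) from a minibatch drawn from p:
    # KL(p || q) = E_{x~p}[log p(x) - log q(x)].
    return (gaussian_logpdf(samples, p_mean, p_std)
            - gaussian_logpdf(samples, q_mean, q_std)).mean()

rng = np.random.default_rng(2)
p_mean, p_std = np.zeros(4), np.ones(4)          # source representation
q_mean, q_std = np.full(4, 0.5), np.ones(4)      # target representation
batch = rng.normal(p_mean, p_std, size=(4096, 4))
est = kl_minibatch_estimate(batch, p_mean, p_std, q_mean, q_std)
# Closed form for these equal-variance Gaussians: 0.5 * ||mu_p - mu_q||^2 = 0.5
print(float(est))
```

With tractable densities, the KL term reduces to a simple minibatch average, avoiding adversarial training or kernel computations.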
arXiv Detail & Related papers (2021-06-14T22:24:23Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves over standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- Unsupervised Domain Adaptation in the Dissimilarity Space for Person Re-identification [11.045405206338486]
We propose a novel Dissimilarity-based Maximum Mean Discrepancy (D-MMD) loss for aligning pair-wise distances.
Empirical results on three challenging benchmark datasets show that the proposed D-MMD loss decreases as the source and target domain distributions become more similar.
arXiv Detail & Related papers (2020-07-27T22:10:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.