Metric-Learning-Assisted Domain Adaptation
- URL: http://arxiv.org/abs/2004.10963v3
- Date: Thu, 11 Jun 2020 09:41:08 GMT
- Title: Metric-Learning-Assisted Domain Adaptation
- Authors: Yueming Yin, Zhen Yang, Haifeng Hu and Xiaofu Wu
- Abstract summary: Many existing domain alignment methods assume that a low source risk, together with alignment of the source and target distributions, implies a low target risk.
We propose a novel metric-learning-assisted domain adaptation (MLA-DA) method, which employs a novel triplet loss to achieve better feature alignment.
- Score: 18.62119154143642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain alignment (DA) has been widely used in unsupervised domain adaptation.
Many existing DA methods assume that a low source risk, together with alignment of the
source and target distributions, implies a low target risk. In this paper, we show that
this does not always hold. We thus propose a novel metric-learning-assisted domain
adaptation (MLA-DA) method, which employs a novel triplet loss to achieve better feature
alignment. We explore the relationship between the second largest probability of a target
sample's prediction and its distance to the decision boundary. Based on this relationship,
we propose a novel mechanism that adaptively adjusts the margin in the triplet loss
according to target predictions. Experimental results show that the proposed triplet loss
achieves clearly better results. We also demonstrate the performance improvement of MLA-DA
over state-of-the-art unsupervised domain adaptation methods on all four standard
benchmarks. Furthermore, MLA-DA shows stable performance in robustness experiments.
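The adaptive-margin mechanism can be illustrated with a short PyTorch sketch. This is a minimal reading of the idea, not the authors' released code: the per-sample margin widens when the second-largest softmax probability of the target prediction is high (the sample sits near a decision boundary); the linear margin mapping, the function name, and the `base_margin` parameter are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_triplet_loss(anchor, positive, negative, target_logits,
                                 base_margin=1.0):
    """Triplet loss whose margin grows with the second-largest softmax
    probability of the target prediction (a hypothetical mapping; the
    paper's exact margin rule may differ).

    anchor, positive, negative: (N, D) feature embeddings.
    target_logits: (N, C) classifier outputs for the target samples.
    """
    probs = F.softmax(target_logits, dim=1)
    # Second-largest class probability: a large value means the sample
    # is close to a decision boundary, so we ask for a wider margin.
    second_prob = probs.topk(2, dim=1).values[:, 1]
    margin = base_margin * (1.0 + second_prob)  # assumed linear mapping

    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```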
Related papers
- Align, Minimize and Diversify: A Source-Free Unsupervised Domain Adaptation Method for Handwritten Text Recognition [11.080302144256164]
The Align, Minimize and Diversify (AMD) method is a Source-Free Unsupervised Domain Adaptation approach for Handwritten Text Recognition (HTR).
Our method explicitly eliminates the need to revisit the source data during adaptation by incorporating three distinct regularization terms.
Experimental results on several benchmarks demonstrate the effectiveness and robustness of AMD, showing it to be competitive with, and often outperforming, DA methods in HTR.
arXiv Detail & Related papers (2024-04-28T17:50:58Z)
- Adversarial Reweighting with $α$-Power Maximization for Domain Adaptation [56.859005008344276]
We propose a novel approach, dubbed Adversarial Reweighting with $α$-Power Maximization (ARPM).
In ARPM, an adversarial reweighting model learns to reweight source domain data so as to identify source-private class samples (a sketch of the α-power term follows this entry).
We show that our method is superior to recent PDA methods.
arXiv Detail & Related papers (2024-04-26T09:29:55Z)
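Read literally, α-power maximization can serve as a robust alternative to entropy minimization: maximizing the per-sample sum of predicted probabilities raised to the power α (α > 1) favors confident, one-hot-like predictions. A minimal sketch under that reading follows; the function name, default α, and exact form of the term are assumptions, not the paper's released implementation.

```python
import torch

def alpha_power_loss(probs: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    """Negative mean alpha-power of predicted class probabilities.

    Minimizing this maximizes sum_c p_c^alpha per sample; for alpha > 1
    the maximum is attained at one-hot predictions, so the term pushes
    target predictions toward confidence. A hypothetical stand-in for
    ARPM's alpha-power maximization term.
    """
    # probs: (N, C) softmax outputs on target data.
    return -(probs.clamp_min(1e-12) ** alpha).sum(dim=1).mean()
```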
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- CAusal and collaborative proxy-tasKs lEarning for Semi-Supervised Domain Adaptation [20.589323508870592]
Semi-supervised domain adaptation (SSDA) adapts a learner to a new domain by effectively utilizing source domain data and a few labeled target samples.
We show that the proposed model significantly outperforms SOTA methods in terms of effectiveness and generalisability on SSDA datasets.
arXiv Detail & Related papers (2023-03-30T16:48:28Z)
- Unsupervised Domain Adaptation Based on the Predictive Uncertainty of Models [1.6498361958317636]
Unsupervised domain adaptation (UDA) aims to improve the prediction performance in the target domain under distribution shifts from the source domain.
We present a novel UDA method that learns domain-invariant features by minimizing the domain divergence.
arXiv Detail & Related papers (2022-11-16T12:23:32Z)
- Distributionally Robust Domain Adaptation [12.02023514105999]
Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions.
In this paper, we propose DRDA, a distributionally robust domain adaptation method.
arXiv Detail & Related papers (2022-10-30T17:29:22Z)
- Domain-Specific Risk Minimization for Out-of-Distribution Generalization [104.17683265084757]
We first establish a generalization bound that explicitly considers the adaptivity gap.
We propose two methods: one estimates the gap to guide the selection of a better hypothesis for the target, and the other minimizes the gap directly by adapting model parameters using online target samples (see the sketch after this entry).
arXiv Detail & Related papers (2022-08-18T06:42:49Z)
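Minimizing the gap online with unlabeled target samples resembles generic test-time adaptation. The sketch below is one common instantiation, entropy minimization on incoming target batches; it is an assumption rather than the paper's actual procedure, and `model` and `optimizer` are presumed to be set up so that only a small parameter subset (e.g., normalization layers) is updated.

```python
import torch
import torch.nn.functional as F

def online_adapt_step(model, target_batch, optimizer):
    """One online adaptation step on an unlabeled target batch.

    Uses entropy minimization, a common test-time objective; whether the
    paper minimizes its adaptivity gap this way is an assumption.
    """
    logits = model(target_batch)
    log_probs = F.log_softmax(logits, dim=1)
    # Mean prediction entropy over the batch; lower means more confident.
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```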
- Multi-level Consistency Learning for Semi-supervised Domain Adaptation [85.90600060675632]
Semi-supervised domain adaptation (SSDA) aims to apply knowledge learned from a fully labeled source domain to a scarcely labeled target domain.
We propose a Multi-level Consistency Learning framework for SSDA.
arXiv Detail & Related papers (2022-05-09T06:41:18Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution (a sketch of the standard discriminator-based estimator follows this entry).
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
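A standard way to realize the closeness measure above is classifier-based density ratio estimation: a discriminator trained to separate source from target features yields r(x) ≈ p_source(x)/p_target(x) from its odds. The sketch below shows that standard construction; the architecture, feature dimension, and training details are assumptions and not necessarily the paper's estimator.

```python
import torch
import torch.nn as nn

# Domain discriminator d(x) ≈ P(source | x). Under equal source/target
# priors, the odds d / (1 - d) estimate the density ratio
# p_source(x) / p_target(x). The 256-d feature input is an assumption.
disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(src_feats: torch.Tensor, tgt_feats: torch.Tensor) -> float:
    """One step of source-vs-target discrimination on feature batches."""
    logits = disc(torch.cat([src_feats, tgt_feats])).squeeze(-1)
    labels = torch.cat([torch.ones(len(src_feats)),
                        torch.zeros(len(tgt_feats))])
    loss = bce(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def density_ratio(feats: torch.Tensor) -> torch.Tensor:
    """Estimate p_source / p_target for each feature vector."""
    d = torch.sigmoid(disc(feats)).squeeze(-1)
    return d / (1.0 - d).clamp_min(1e-6)
```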