Bures Joint Distribution Alignment with Dynamic Margin for Unsupervised
Domain Adaptation
- URL: http://arxiv.org/abs/2203.06836v1
- Date: Mon, 14 Mar 2022 03:20:01 GMT
- Title: Bures Joint Distribution Alignment with Dynamic Margin for Unsupervised
Domain Adaptation
- Authors: Yong-Hui Liu, Chuan-Xian Ren, Xiao-Lin Xu, Ke-Kun Huang
- Abstract summary: Unsupervised domain adaptation (UDA) is one of the prominent tasks of transfer learning.
We propose a novel alignment loss term that minimizes the kernel Bures-Wasserstein distance between the joint distributions.
Experiments show that BJDA is very effective for the UDA tasks, as it outperforms state-of-the-art algorithms in most experimental settings.
- Score: 17.06364218327213
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) is one of the prominent tasks of
transfer learning, and it provides an effective approach to mitigate the
distribution shift between the labeled source domain and the unlabeled target
domain. Prior works mainly focus on aligning the marginal distributions or the
estimated class-conditional distributions. However, the joint dependency among
the feature and the label is crucial for the adaptation task and is not fully
exploited. To address this problem, we propose the Bures Joint Distribution
Alignment (BJDA) algorithm which directly models the joint distribution shift
based on the optimal transport theory in the infinite-dimensional kernel
spaces. Specifically, we propose a novel alignment loss term that minimizes the
kernel Bures-Wasserstein distance between the joint distributions. Technically,
BJDA can effectively capture the nonlinear structures underlying the data. In
addition, we introduce a dynamic margin in the contrastive learning phase to
flexibly characterize the class separability and improve the discriminative
ability of the representations. It also avoids the cross-validation procedure
used to determine the margin parameter in traditional triplet-loss-based methods.
Extensive experiments show that BJDA is very effective for the UDA tasks, as it
outperforms state-of-the-art algorithms in most experimental settings. In
particular, BJDA improves the average accuracy of UDA tasks by 2.8% on
Adaptiope, 1.4% on Office-Caltech10, and 1.1% on ImageCLEF-DA.
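The abstract's two ingredients can be illustrated concretely. Below is a minimal NumPy/SciPy sketch: `bures_wasserstein` computes the ordinary finite-dimensional Bures-Wasserstein (2-Wasserstein) distance between two Gaussians, not the paper's kernelized, infinite-dimensional version; `dynamic_margin_triplet` is a generic stand-in for a triplet loss whose margin is derived from the data (here, from class-prototype separation) rather than cross-validated — the function names, the prototype-based margin rule, and the `scale` parameter are illustrative assumptions, not BJDA's actual formulation.

```python
import numpy as np
from scipy.linalg import sqrtm


def bures_wasserstein(mu1, cov1, mu2, cov2):
    """Squared 2-Wasserstein distance between N(mu1, cov1) and N(mu2, cov2):
    ||mu1 - mu2||^2 + tr(cov1 + cov2 - 2 (cov1^{1/2} cov2 cov1^{1/2})^{1/2})."""
    sqrt_c1 = np.real(sqrtm(cov1))                 # matrix square root of cov1
    cross = np.real(sqrtm(sqrt_c1 @ cov2 @ sqrt_c1))
    mean_term = np.sum((mu1 - mu2) ** 2)
    bures_term = np.trace(cov1 + cov2 - 2.0 * cross)
    return mean_term + bures_term


def dynamic_margin_triplet(anchor, positive, negative, prototypes, scale=0.5):
    """Triplet loss with a data-driven margin (hypothetical rule: a fraction of
    the mean pairwise distance between class prototypes in embedding space),
    avoiding a hand-tuned constant margin."""
    k = len(prototypes)
    pairwise = [np.linalg.norm(prototypes[i] - prototypes[j])
                for i in range(k) for j in range(i + 1, k)]
    margin = scale * np.mean(pairwise)             # dynamic, not cross-validated
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

As a sanity check, the distance between identical Gaussians is 0, and shifting only the mean by a vector `d` adds exactly `||d||^2`; BJDA applies the analogous quantity to joint (feature, label) distributions embedded in an RKHS.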
Related papers
- Balanced Learning for Domain Adaptive Semantic Segmentation [37.70100155953312]
Unsupervised domain adaptation (UDA) for semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
Despite the effectiveness of self-training techniques in UDA, they struggle to learn each class in a balanced manner due to inherent class imbalance and distribution shift in both data and label space between domains.
We propose Balanced Learning for Domain Adaptation (BLDA), a novel approach to directly assess and alleviate class bias without requiring prior knowledge about the distribution shift.
arXiv Detail & Related papers (2025-12-07T15:21:22Z)
- Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning [57.43322536718131]
We present a two-stage source-free domain adaptation (SFDA) framework for medical image segmentation.
In the prototype-anchored feature alignment stage, we first utilize the weights of the pre-trained pixel-wise classifier as source prototypes.
Then, we introduce a bi-directional transport to align the target features with the class prototypes by minimizing the expected transport cost.
arXiv Detail & Related papers (2023-07-19T06:07:12Z)
- Memory Consistent Unsupervised Off-the-Shelf Model Adaptation for Source-Relaxed Medical Image Segmentation [13.260109561599904]
Unsupervised domain adaptation (UDA) has been a vital protocol for migrating information learned from a labeled source domain to an unlabeled heterogeneous target domain.
We propose "off-the-shelf (OS)" UDA (OSUDA) for image segmentation, adapting an OS segmentor trained in a source domain to a target domain without access to source-domain data during adaptation.
arXiv Detail & Related papers (2022-09-16T13:13:50Z)
- Balancing Discriminability and Transferability for Source-Free Domain Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
arXiv Detail & Related papers (2022-06-16T09:06:22Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance in the amount of annotated data between the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Adapting Off-the-Shelf Source Segmenter for Target Medical Image Segmentation [12.703234995718372]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled and unseen target domain.
Access to the source domain data at the adaptation stage is often limited, due to data storage or privacy issues.
We propose to adapt an "off-the-shelf" segmentation model pre-trained in the source domain to the target domain.
arXiv Detail & Related papers (2021-06-23T16:16:55Z)
- Margin Preserving Self-paced Contrastive Learning Towards Domain Adaptation for Medical Image Segmentation [51.93711960601973]
We propose a novel margin preserving self-paced contrastive Learning model for cross-modal medical image segmentation.
With the guidance of progressively refined semantic prototypes, a novel margin preserving contrastive loss is proposed to boost the discriminability of embedded representation space.
Experiments on cross-modal cardiac segmentation tasks demonstrate that MPSCL significantly improves semantic segmentation performance.
arXiv Detail & Related papers (2021-03-15T15:23:10Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
- Discriminative Feature Alignment: Improving Transferability of Unsupervised Domain Adaptation by Gaussian-guided Latent Alignment [27.671964294233756]
In this study, we focus on the unsupervised domain adaptation problem where an approximate inference model is to be learned from a labeled data domain.
The success of unsupervised domain adaptation largely relies on the cross-domain feature alignment.
We introduce a Gaussian-guided latent alignment approach to align the latent feature distributions of the two domains under the guidance of the prior distribution.
In such an indirect way, the distributions over the samples from the two domains will be constructed on a common feature space, i.e., the space of the prior.
arXiv Detail & Related papers (2020-06-23T05:33:54Z)
- Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert the domain of each sample via a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.