A Class-aware Optimal Transport Approach with Higher-Order Moment Matching for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2401.15952v1
- Date: Mon, 29 Jan 2024 08:27:31 GMT
- Title: A Class-aware Optimal Transport Approach with Higher-Order Moment Matching for Unsupervised Domain Adaptation
- Authors: Tuan Nguyen, Van Nguyen, Trung Le, He Zhao, Quan Hung Tran, Dinh Phung
- Abstract summary: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We introduce a novel approach called class-aware optimal transport (OT), which measures the OT distance between a distribution over the source class-conditional distributions and a mixture of the source and target data distributions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a
labeled source domain to an unlabeled target domain. In this paper, we
introduce a novel approach called class-aware optimal transport (OT), which
measures the OT distance between a distribution over the source
class-conditional distributions and a mixture of the source and target data
distributions. Our class-aware OT leverages a cost function that determines the
matching extent between a given data example and a source class-conditional
distribution. By optimizing this cost function, we find the optimal matching
between target examples and source class-conditional distributions, effectively
addressing the data and label shifts that occur between the two domains. To
handle the class-aware OT efficiently, we propose an amortization solution that
employs deep neural networks to formulate the transportation probabilities and
the cost function. Additionally, we propose minimizing class-aware Higher-order
Moment Matching (HMM) to align the corresponding class regions on the source
and target domains. The class-aware HMM component offers a computationally
economical approach to accurately evaluating the HMM distance between the
two distributions. Extensive experiments on benchmark datasets demonstrate that
our proposed method significantly outperforms existing state-of-the-art
baselines.
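To make the two components above concrete, here is a minimal PyTorch sketch, assuming D-dimensional features and K source classes. Everything below is illustrative rather than the authors' released code: the hypothetical `AmortizedMatcher` stands in for the amortization network that produces the transportation probabilities, and `weighted_hmm_loss` matches the first few raw moments of a source class region against the soft-weighted target region assigned to that class.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes D-dimensional features and K source classes; the names
# AmortizedMatcher and weighted_hmm_loss are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AmortizedMatcher(nn.Module):
    """Amortization network: maps a feature vector to a distribution
    over the K source class-conditional distributions, playing the
    role of the transportation probabilities in the class-aware OT."""
    def __init__(self, feat_dim, num_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, feats):
        # Row-stochastic matching probabilities, shape (N, K).
        return F.softmax(self.net(feats), dim=-1)

def weighted_hmm_loss(src_feats, tgt_feats, tgt_weights, order=3):
    """Higher-order moment matching for one class region: compare the
    first `order` raw moments of the source class features with the
    soft-weighted target moments (a simplification of the paper's
    class-aware HMM term)."""
    w = tgt_weights / tgt_weights.sum().clamp_min(1e-6)   # (N, 1)
    loss = src_feats.new_zeros(())
    for k in range(1, order + 1):
        src_m = src_feats.pow(k).mean(dim=0)
        tgt_m = (w * tgt_feats.pow(k)).sum(dim=0)
        loss = loss + (src_m - tgt_m).pow(2).sum()
    return loss

# Usage: weight each class's HMM term by the amortized matching
# probabilities of the (unlabeled) target examples.
D, K = 64, 10
matcher = AmortizedMatcher(D, K)
src = torch.randn(128, D)              # source features
src_y = torch.randint(0, K, (128,))    # source labels
tgt = torch.randn(96, D)               # target features, no labels
probs = matcher(tgt)                   # (96, K) transport probabilities
total = torch.zeros(())
for c in range(K):
    src_c = src[src_y == c]
    if src_c.shape[0] == 0:
        continue
    total = total + weighted_hmm_loss(src_c, tgt, probs[:, c:c + 1])
print(float(total))
```

In the paper the matching probabilities and the cost function are trained jointly; here the soft weighting simply illustrates how amortized class assignments turn a global moment-matching loss into a class-aware one.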
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Conditional Support Alignment for Domain Adaptation with Label Shift [8.819673391477034]
Unsupervised domain adaptation (UDA) refers to a setting in which a model is trained on labeled samples from the source domain and unlabeled ones from the target domain.
We propose a novel conditional adversarial support alignment (CASA) method that minimizes the conditional symmetric support divergence between the feature representation distributions of the source and target domains.
arXiv Detail & Related papers (2023-05-29T05:20:18Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, and each group is treated with tailored learning objectives.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch; a minimal MMD sketch follows after this list.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tackles the domain adaptation problem without access to source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation practice extracts effective features by incorporating pseudo-labels for the target domain.
It is essential to align the target data with only a small, relevant subset of the source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
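For reference, the Maximum Mean Discrepancy loss mentioned in the Divide and Contrast entry reduces, in its plain two-sample form, to a few lines of PyTorch. This is a generic RBF-kernel sketch under an assumed bandwidth; the memory-bank bookkeeping that DaC adds on top is omitted.

```python
# Minimal RBF-kernel MMD sketch (plain two-sample version; the
# memory-bank variant used by DaC is omitted for brevity).
import torch

def rbf_mmd(x, y, bandwidth=1.0):
    """Biased estimate of squared MMD between batches x (n, d) and
    y (m, d) under the RBF kernel exp(-||a - b||^2 / (2 * bw^2))."""
    def kernel(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

source_like = torch.randn(64, 32)      # e.g. source-like target samples
target_specific = torch.randn(48, 32)  # e.g. target-specific samples
print(float(rbf_mmd(source_like, target_specific)))
```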