A Framework for Supervised Heterogeneous Transfer Learning using Dynamic
Distribution Adaptation and Manifold Regularization
- URL: http://arxiv.org/abs/2108.12293v1
- Date: Fri, 27 Aug 2021 14:00:09 GMT
- Authors: Md Geaur Rahman and Md Zahidul Islam
- Abstract summary: We present a framework called TLF that builds a classifier for a target domain that has only a few labeled training records.
We handle distribution divergence by simultaneously optimizing the structural risk functional, joint distributions between domains, and the manifold consistency underlying marginal distributions.
We evaluate TLF on seven publicly available natural datasets and compare the performance of TLF against the performance of eleven state-of-the-art techniques.
- Score: 3.476077954140922
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transfer learning aims to learn classifiers for a target domain by
transferring knowledge from a source domain. However, two main issues, feature
discrepancy and distribution divergence, can make transfer learning very
difficult in practice. In this paper, we present a framework called TLF that
builds a classifier for a target domain that has only a few labeled training
records by transferring knowledge from a source domain with many labeled
records. While existing methods often focus on one issue and leave the other
for future work, TLF handles both issues simultaneously. In TLF, we alleviate
feature discrepancy by identifying shared label distributions that act as
pivots to bridge the domains. We
handle distribution divergence by simultaneously optimizing the structural risk
functional, joint distributions between domains, and the manifold consistency
underlying marginal distributions. Moreover, for manifold consistency we
exploit its intrinsic properties by identifying the k nearest neighbors of a
record, where the value of k is determined automatically in TLF. Furthermore,
to avoid negative transfer, we consider only the source records that belong to
the source pivots during knowledge transfer. We
evaluate TLF on seven publicly available natural datasets and compare the
performance of TLF against the performance of eleven state-of-the-art
techniques. We also evaluate the effectiveness of TLF in some challenging
situations. Our experimental results, including statistical sign test and
Nemenyi test analyses, indicate a clear superiority of the proposed framework
over the state-of-the-art techniques.
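The abstract combines three terms: a structural risk functional, joint distribution adaptation, and a manifold-consistency penalty built from each record's k nearest neighbors. The paper's actual optimization is richer than this (it also adapts joint distributions, selects k automatically, and filters source records by pivots), but the manifold-regularization idea alone can be sketched. The following is a minimal, hypothetical illustration, not TLF itself: a logistic classifier whose decision scores are smoothed over a k-NN graph Laplacian; the model form and the values of k, lam, and gamma are assumptions, not taken from the paper.

```python
import numpy as np

def knn_laplacian(X, k):
    """Graph Laplacian L = D - W of a symmetrized k-nearest-neighbour graph."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    W = np.zeros_like(d)
    for i in range(len(X)):
        idx = np.argsort(d[i])[1:k + 1]  # k nearest, skipping self (distance 0)
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)               # symmetrize the adjacency
    return np.diag(W.sum(1)) - W

def train(X_lab, y_lab, X_all, k=5, lam=1e-2, gamma=1e-2, steps=500, lr=0.1):
    """Gradient descent on: logistic loss (few labeled records)
    + lam * ||w||^2 (structural risk) + gamma * f^T L f (manifold penalty),
    where f = X_all @ w are decision scores on all records."""
    L = knn_laplacian(X_all, k)
    w = np.zeros(X_all.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X_lab @ w))            # labeled predictions
        g_loss = X_lab.T @ (p - y_lab) / len(y_lab)     # logistic gradient
        f = X_all @ w
        g_man = 2 * gamma * X_all.T @ (L @ f) / len(X_all)  # grad of f^T L f
        w -= lr * (g_loss + 2 * lam * w + g_man)
    return w
```

The Laplacian term penalizes classifiers whose scores differ sharply between neighboring records, which is one common way to encode the manifold assumption; how TLF combines this with its joint distribution adaptation is described in the paper itself.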
Related papers
- Balancing Discriminability and Transferability for Source-Free Domain
Adaptation [55.143687986324935]
Conventional domain adaptation (DA) techniques aim to improve domain transferability by learning domain-invariant representations.
The requirement of simultaneous access to labeled source and unlabeled target renders them unsuitable for the challenging source-free DA setting.
We derive novel insights to show that a mixup between original and corresponding translated generic samples enhances the discriminability-transferability trade-off.
arXiv Detail & Related papers (2022-06-16T09:06:22Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in
Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation
[74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Interventional Domain Adaptation [81.0692660794765]
Domain adaptation (DA) aims to transfer discriminative features learned from source domain to target domain.
Standard domain-invariance learning suffers from spurious correlations and incorrectly transfers the source-specifics.
We create counterfactual features that distinguish the domain-specifics from domain-sharable part.
arXiv Detail & Related papers (2020-11-07T09:53:13Z)
- Towards Fair Knowledge Transfer for Imbalanced Domain Adaptation
[61.317911756566126]
We propose a Towards Fair Knowledge Transfer framework to handle the fairness challenge in imbalanced cross-domain learning.
Specifically, a novel cross-domain mixup generation is exploited to augment the minority source set with target information to enhance fairness.
Our model improves overall accuracy by more than 20% on two benchmarks.
arXiv Detail & Related papers (2020-10-23T06:29:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.