Distant Transfer Learning via Deep Random Walk
- URL: http://arxiv.org/abs/2006.07622v1
- Date: Sat, 13 Jun 2020 11:31:24 GMT
- Title: Distant Transfer Learning via Deep Random Walk
- Authors: Qiao Xiao and Yu Zhang
- Abstract summary: We study distant transfer learning by proposing a DeEp Random Walk basEd distaNt Transfer (DERWENT) method.
Based on sequences identified by the random walk technique on a data graph, the proposed DERWENT model enforces adjacent data points in a sequence to be similar.
Empirical studies on several benchmark datasets demonstrate that the proposed DERWENT algorithm yields state-of-the-art performance.
- Score: 7.957823585750222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning, which is to improve the learning performance in the target
domain by leveraging useful knowledge from the source domain, often requires
that those two domains are very close, which limits its application scope.
Recently, distant transfer learning has been studied to transfer knowledge
between two distant or even totally unrelated domains via (usually unlabeled)
auxiliary domains that act as a bridge, in the spirit of human transitive
inference, where two completely unrelated concepts can be connected through
gradual knowledge transfer. In this paper, we study distant
transfer learning by proposing a DeEp Random Walk basEd distaNt Transfer
(DERWENT) method. Different from existing distant transfer learning models that
implicitly identify the path of knowledge transfer between the source and
target instances through auxiliary instances, the proposed DERWENT model can
explicitly learn such paths via the deep random walk technique. Specifically,
based on sequences identified by the random walk technique on a data graph
where source and target data have no direct edges, the proposed DERWENT model
enforces adjacent data points in a sequence to be similar, makes the ending
data point be represented by the other data points in the same sequence, and
weights the training losses of source data. Empirical studies on several
benchmark datasets demonstrate that the proposed DERWENT algorithm yields
state-of-the-art performance.
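The pieces described above can be illustrated with a minimal NumPy sketch: a random walk over a data graph in which source and target nodes share no direct edge, plus two of the sequence-level terms (adjacent points should be similar; the ending point should be representable by the rest of the sequence). Function names, the toy graph, and the embeddings are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk(adj, start, length):
    """Sample a node sequence by repeatedly stepping to a random neighbor."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = np.flatnonzero(adj[walk[-1]])
        if neighbors.size == 0:  # dead end: stop early
            break
        walk.append(rng.choice(neighbors))
    return walk

def sequence_losses(embeddings, walk):
    """Two DERWENT-style terms on one walk:
    (1) adjacent points in the sequence should be similar;
    (2) the ending point should be representable by the other points."""
    seq = embeddings[walk]                     # (L, d) embeddings along the walk
    adjacency_loss = np.sum((seq[1:] - seq[:-1]) ** 2)
    context_mean = seq[:-1].mean(axis=0)       # represent the end by the rest
    reconstruction_loss = np.sum((seq[-1] - context_mean) ** 2)
    return adjacency_loss, reconstruction_loss

# Toy graph: source nodes {0,1}, auxiliary {2,3}, target {4,5};
# note there is no direct source-target edge, only a path through auxiliaries.
adj = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[i, j] = adj[j, i] = 1

emb = rng.normal(size=(6, 8))                  # hypothetical learned embeddings
walk = random_walk(adj, start=0, length=4)
adj_loss, rec_loss = sequence_losses(emb, walk)
print(walk, adj_loss, rec_loss)
```

In the full method these terms would be minimized jointly with the weighted source training losses while the embeddings are learned; here the embeddings are fixed random vectors purely to show the loss computation.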
Related papers
- Contrastive Representation for Data Filtering in Cross-Domain Offline Reinforcement Learning [46.08671291758573]
Cross-domain offline reinforcement learning leverages source domain data with diverse transition dynamics to alleviate the data requirement for the target domain.
Existing methods address this problem by measuring the dynamics gap via domain classifiers, relying on assumptions about the transferability of paired domains.
We propose a novel representation-based approach to measure the domain gap, where the representation is learned through a contrastive objective by sampling transitions from different domains.
arXiv Detail & Related papers (2024-05-10T02:21:42Z)
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
arXiv Detail & Related papers (2024-01-12T02:48:51Z)
- Transfer RL via the Undo Maps Formalism [29.798971172941627]
Transferring knowledge across domains is one of the most fundamental problems in machine learning.
We propose TvD: transfer via distribution matching, a framework to transfer knowledge across interactive domains.
We show this objective leads to a policy update scheme reminiscent of imitation learning, and derive an efficient algorithm to implement it.
arXiv Detail & Related papers (2022-11-26T03:44:28Z)
- Ranking Distance Calibration for Cross-Domain Few-Shot Learning [91.22458739205766]
Recent progress in few-shot learning promotes a more realistic cross-domain setting.
Due to the domain gap and disjoint label spaces between source and target datasets, their shared knowledge is extremely limited.
We employ a re-ranking process for calibrating a target distance matrix by discovering the reciprocal k-nearest neighbours within the task.
arXiv Detail & Related papers (2021-12-01T03:36:58Z)
- TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
- Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with Reliable Transfer for Cardiac Segmentation [69.09432302497116]
We propose a cutting-edge semi-supervised domain adaptation framework, namely Dual-Teacher++.
We design novel dual teacher models, including an inter-domain teacher model to explore cross-modality priors from the source domain (e.g., MR) and an intra-domain teacher model to investigate the knowledge within the unlabeled target domain.
In this way, the student model can obtain reliable dual-domain knowledge and yield improved performance on target domain data.
arXiv Detail & Related papers (2021-01-07T05:17:38Z)
- Flexible deep transfer learning by separate feature embeddings and manifold alignment [0.0]
Object recognition is a key enabler across industry and defense.
Unfortunately, algorithms trained on existing labeled datasets do not directly generalize to new data because the data distributions do not match.
We propose a novel deep learning framework that overcomes this limitation by learning separate feature extractions for each domain.
arXiv Detail & Related papers (2020-12-22T19:24:44Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real world datasets show that our method stably improves on standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- Deep Adversarial Transition Learning using Cross-Grafted Generative Stacks [3.756448228784421]
We present a novel "deep adversarial transition learning" (DATL) framework that bridges the domain gap.
We construct variational auto-encoders (VAEs) for the two domains, and form bidirectional transitions by cross-grafting the VAEs' decoder stacks.
Generative adversarial networks (GANs) are employed for domain adaptation, mapping the target domain data to the known label space of the source domain.
arXiv Detail & Related papers (2020-09-25T04:25:27Z)
- Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation which leverage adversarial learning to unify the source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.