Understanding Knowledge Transferability for Transfer Learning: A Survey
- URL: http://arxiv.org/abs/2507.03175v1
- Date: Thu, 03 Jul 2025 21:06:30 GMT
- Title: Understanding Knowledge Transferability for Transfer Learning: A Survey
- Authors: Haohua Wang, Jingge Wang, Zijie Zhao, Yang Tan, Yanru Wu, Hanbing Liu, Jingyun Yang, Enming Zhang, Xiangyu Chen, Zhengze Rong, Shanxin Guo, Yang Li
- Abstract summary: Transfer learning enables the transfer of knowledge from a source task to improve performance on a target task. Despite its widespread use, how to reliably assess the transferability of knowledge remains a challenge. We provide a unified taxonomy of transferability metrics, categorizing them based on transferable knowledge types and measurement granularity.
- Score: 9.351787368829013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning has become an essential paradigm in artificial intelligence, enabling the transfer of knowledge from a source task to improve performance on a target task. This approach, particularly through techniques such as pretraining and fine-tuning, has seen significant success in fields like computer vision and natural language processing. However, despite its widespread use, how to reliably assess the transferability of knowledge remains a challenge. Understanding the theoretical underpinnings of each transferability metric is critical for ensuring the success of transfer learning. In this survey, we provide a unified taxonomy of transferability metrics, categorizing them based on transferable knowledge types and measurement granularity. This work examines the various metrics developed to evaluate the potential of source knowledge for transfer learning and their applicability across different learning paradigms, emphasizing the need for careful selection of these metrics. By offering insights into how different metrics work under varying conditions, this survey aims to guide researchers and practitioners in selecting the most appropriate metric for specific applications, contributing to more efficient, reliable, and trustworthy AI systems. Finally, we discuss some open challenges in this field and propose future research directions to further advance the application of transferability metrics in trustworthy transfer learning.
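To make the notion of a transferability metric concrete, here is a minimal sketch of one well-known metric, LEEP (Log Expected Empirical Prediction), which scores how well a source model's class predictions explain target labels; the NumPy implementation and variable names below are illustrative, not taken from the survey.

```python
import numpy as np

def leep_score(source_probs: np.ndarray, target_labels: np.ndarray) -> float:
    """Illustrative LEEP transferability score.

    source_probs: (n, |Z|) softmax outputs of the source model on target inputs.
    target_labels: (n,) integer target labels in {0, ..., |Y|-1}.
    Higher scores suggest the source model should transfer better.
    """
    n = source_probs.shape[0]
    num_target_classes = int(target_labels.max()) + 1

    # Empirical joint distribution P(y, z) over target labels and source classes.
    joint = np.zeros((num_target_classes, source_probs.shape[1]))
    for probs, y in zip(source_probs, target_labels):
        joint[y] += probs
    joint /= n

    # Conditional P(y | z) via the marginal P(z).
    marginal_z = joint.sum(axis=0, keepdims=True)
    conditional = joint / np.clip(marginal_z, 1e-12, None)

    # LEEP: average log-likelihood of the target labels under the "dummy"
    # classifier that routes source predictions through P(y | z).
    predicted = source_probs @ conditional.T            # shape (n, |Y|)
    likelihood = predicted[np.arange(n), target_labels]
    return float(np.mean(np.log(np.clip(likelihood, 1e-12, None))))
```

In the survey's taxonomy, such a metric would be categorized by the knowledge type it measures (here, the source model's label-level predictions) and its measurement granularity (a single scalar per source-target task pair).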
Related papers
- FAST: Similarity-based Knowledge Transfer for Efficient Policy Learning [57.4737157531239]
Transfer Learning offers the potential to accelerate learning by transferring knowledge across tasks. It faces critical challenges such as negative transfer, domain adaptation, and inefficiency in selecting solid source policies. In this work, we address these key issues in TL to improve knowledge transfer and agent performance across tasks while reducing computational costs.
arXiv Detail & Related papers (2025-07-27T22:21:53Z) - Trustworthy Transfer Learning: A Survey [42.8355039035467]
We understand transfer learning from the perspectives of knowledge transferability and trustworthiness. This paper provides a comprehensive review of trustworthy transfer learning from various aspects. We highlight the open questions and future directions for understanding transfer learning in a reliable and trustworthy manner.
arXiv Detail & Related papers (2024-12-18T18:03:51Z) - Bayesian Transfer Learning [13.983016833412307]
"Transfer learning" seeks to improve inference and/or predictive accuracy on a domain of interest by leveraging data from related domains.
This article highlights Bayesian approaches to transfer learning, which have received relatively limited attention despite their innate compatibility with the notion of drawing upon prior knowledge to guide new learning tasks.
We discuss how these methods address the problem of finding the optimal information to transfer between domains, which is a central question in transfer learning.
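As a toy illustration of this principle, the conjugate-Gaussian sketch below fits a posterior on plentiful source data and reuses it as the prior for a data-scarce target task; the setup and names are invented for this summary, not drawn from the paper.

```python
import numpy as np

def gaussian_posterior(X, y, prior_mean, prior_cov, noise_var=0.25):
    """Conjugate posterior update for Bayesian linear regression."""
    prior_prec = np.linalg.inv(prior_cov)
    post_prec = prior_prec + (X.T @ X) / noise_var
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (prior_prec @ prior_mean + (X.T @ y) / noise_var)
    return post_mean, post_cov

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

# Plentiful source data; only a handful of samples from the related target task.
X_src = rng.normal(size=(200, 2)); y_src = X_src @ w_true + rng.normal(scale=0.5, size=200)
X_tgt = rng.normal(size=(5, 2));   y_tgt = X_tgt @ w_true + rng.normal(scale=0.5, size=5)

# Step 1: learn on the source domain starting from a vague prior.
src_mean, src_cov = gaussian_posterior(X_src, y_src, np.zeros(2), 10.0 * np.eye(2))

# Step 2: transfer -- the source posterior becomes the target prior, so a few
# target samples are enough to recover weights close to w_true.
tgt_mean, _ = gaussian_posterior(X_tgt, y_tgt, src_mean, src_cov)
print(tgt_mean)
```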
arXiv Detail & Related papers (2023-12-20T23:38:17Z) - Similarity-based Knowledge Transfer for Cross-Domain Reinforcement Learning [3.3148826359547523]
We develop a semi-supervised alignment loss to match different spaces with a set of encoder-decoders.
In comparison to prior works, our method does not require data to be aligned, paired or collected by expert policies.
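A rough sketch of what such an alignment objective can look like is shown below: per-domain encoder-decoders share a latent space, reconstruction keeps the encodings informative, and a distribution-matching term (MMD here, chosen only for illustration) aligns the latents without paired data; all module and loss names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rbf_mmd(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD with an RBF kernel; matches latent distributions without pairing."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

class AlignedSpaces(nn.Module):
    """Hypothetical encoder-decoder pair per domain with a shared latent space."""
    def __init__(self, src_dim: int, tgt_dim: int, latent_dim: int = 16):
        super().__init__()
        self.enc_src = nn.Sequential(nn.Linear(src_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.enc_tgt = nn.Sequential(nn.Linear(tgt_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.dec_src = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, src_dim))
        self.dec_tgt = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, tgt_dim))

    def loss(self, x_src: torch.Tensor, x_tgt: torch.Tensor) -> torch.Tensor:
        z_src, z_tgt = self.enc_src(x_src), self.enc_tgt(x_tgt)
        recon = F.mse_loss(self.dec_src(z_src), x_src) + F.mse_loss(self.dec_tgt(z_tgt), x_tgt)
        return recon + rbf_mmd(z_src, z_tgt)  # reconstruction + unpaired alignment
```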
arXiv Detail & Related papers (2023-12-05T19:26:01Z) - Feasibility of Transfer Learning: A Mathematical Framework [4.530876736231948]
It begins by establishing the necessary mathematical concepts and constructing a mathematical framework for transfer learning.
It then identifies and formulates the three-step transfer learning procedure as an optimization problem, allowing for the resolution of the feasibility issue.
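The paper's exact formulation is not reproduced here, but the general shape of such a three-step objective (map the target input into the source domain, apply the pretrained source model, map its output back to the target space) can be written as follows; the symbols T_in, T_out, and f_S are our notation, not the paper's.

```latex
% Hedged reconstruction of a three-step transfer objective, not the paper's notation:
% learn input/output maps around a fixed pretrained source model f_S.
\min_{T_{\mathrm{in}},\, T_{\mathrm{out}}}\;
  \mathbb{E}_{(x,y)\sim \mathcal{D}_T}
  \Big[\, \ell\Big( T_{\mathrm{out}}\big( f_S( T_{\mathrm{in}}(x) ) \big),\; y \Big) \Big]
```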
arXiv Detail & Related papers (2023-05-22T12:44:38Z) - Transferability Estimation Based On Principal Gradient Expectation [68.97403769157117]
A desirable cross-task transferability metric should be compatible with actual transfer results while keeping self-consistency.
Existing transferability metrics are estimated on a particular model by comparing source and target tasks.
We propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks.
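The abstract does not spell out the algorithm, but the general flavor of a gradient-expectation score can be sketched as follows: average each task's gradients over the same random parameter restarts and compare the resulting directions; this is an assumption-laden illustration, not the authors' method.

```python
import numpy as np

def gradient_transferability(src_grad_fn, tgt_grad_fn, dim=128, num_restarts=10, seed=0):
    """Cosine similarity between two tasks' expected gradients as a crude
    transferability proxy. Each *_grad_fn(theta) returns that task's loss
    gradient at parameters theta; both tasks see the same random restarts."""
    rng = np.random.default_rng(seed)
    thetas = [rng.normal(size=dim) for _ in range(num_restarts)]
    g_src = np.mean([src_grad_fn(t) for t in thetas], axis=0)
    g_tgt = np.mean([tgt_grad_fn(t) for t in thetas], axis=0)
    return float(g_src @ g_tgt / (np.linalg.norm(g_src) * np.linalg.norm(g_tgt) + 1e-12))
```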
arXiv Detail & Related papers (2022-11-29T15:33:02Z) - Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z) - A Taxonomy of Similarity Metrics for Markov Decision Processes [62.997667081978825]
In recent years, transfer learning has succeeded in making Reinforcement Learning (RL) algorithms more efficient.
In this paper, we propose a categorization of these metrics and analyze the definitions of similarity proposed so far.
arXiv Detail & Related papers (2021-03-08T12:36:42Z) - What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
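A standard way to probe the same-basin claim is to evaluate the loss along the straight line between two solutions: if no barrier rises above the endpoint losses, the two models sit in one basin. The sketch below is a generic version of this interpolation check, not the paper's evaluation code.

```python
import copy
import torch

@torch.no_grad()
def loss_along_interpolation(model_a, model_b, loss_fn, data, steps=11):
    """Loss at evenly spaced points on the segment between two weight vectors.
    A flat profile (no bump above the endpoints) suggests a shared basin."""
    params_a = [p.clone() for p in model_a.parameters()]
    params_b = [p.clone() for p in model_b.parameters()]
    probe = copy.deepcopy(model_a)
    x, y = data
    losses = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        for p, pa, pb in zip(probe.parameters(), params_a, params_b):
            p.copy_((1 - alpha) * pa + alpha * pb)
        losses.append(loss_fn(probe(x), y).item())
    return losses
```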
arXiv Detail & Related papers (2020-08-26T17:23:40Z) - Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data.
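A common way to quantify embedding uniformity is the log-mean-exp of pairwise Gaussian potentials on the unit hypersphere (the Wang & Isola formulation); the sketch below uses that form as a stand-in for the paper's regularizer, whose exact definition may differ.

```python
import torch
import torch.nn.functional as F

def uniformity_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """log E[exp(-t * ||z_i - z_j||^2)] over distinct pairs of L2-normalized
    embeddings; lower values mean features spread more uniformly on the sphere."""
    z = F.normalize(z, dim=1)
    sq_dists = torch.pdist(z).pow(2)   # pairwise distances over pairs i < j
    return sq_dists.mul(-t).exp().mean().log()

# As a regularizer: total_loss = task_loss + lam * uniformity_loss(features)
```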
arXiv Detail & Related papers (2020-06-30T04:39:36Z) - Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability [27.65302656389911]
We analyze and demonstrate the connections between knowledge transferability and adversarial transferability.
Our theoretical studies show that adversarial transferability indicates knowledge transferability and vice versa.
We conduct extensive experiments for different scenarios on diverse datasets, showing a positive correlation between adversarial transferability and knowledge transferability.
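Adversarial transferability is commonly measured as the rate at which adversarial examples crafted against one model also fool another; the FGSM-based sketch below shows that measurement in its simplest form and is illustrative rather than the paper's protocol.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step FGSM adversarial examples crafted against `model`."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach().clamp(0.0, 1.0)

def adversarial_transfer_rate(surrogate, target, x, y, eps=0.03):
    """Fraction of surrogate-crafted adversarial examples that also fool `target`."""
    x_adv = fgsm(surrogate, x, y, eps)     # gradients needed only here
    with torch.no_grad():                  # plain evaluation on the target model
        fooled = target(x_adv).argmax(dim=1) != y
    return fooled.float().mean().item()
```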
arXiv Detail & Related papers (2020-06-25T16:04:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.