Trustworthy Transfer Learning: A Survey
- URL: http://arxiv.org/abs/2412.14116v1
- Date: Wed, 18 Dec 2024 18:03:51 GMT
- Title: Trustworthy Transfer Learning: A Survey
- Authors: Jun Wu, Jingrui He
- Abstract summary: We understand transfer learning from the perspectives of knowledge transferability and trustworthiness.
This paper provides a comprehensive review of trustworthy transfer learning from various aspects.
We highlight the open questions and future directions for understanding transfer learning in a reliable and trustworthy manner.
- Abstract: Transfer learning aims to transfer knowledge or information from a source domain to a relevant target domain. In this paper, we understand transfer learning from the perspectives of knowledge transferability and trustworthiness. This involves two research questions: How is knowledge transferability quantitatively measured and enhanced across domains? Can we trust the transferred knowledge in the transfer learning process? To answer these questions, this paper provides a comprehensive review of trustworthy transfer learning from various aspects, including problem definitions, theoretical analysis, empirical algorithms, and real-world applications. Specifically, we summarize recent theories and algorithms for understanding knowledge transferability under (within-domain) IID and non-IID assumptions. In addition to knowledge transferability, we review the impact of trustworthiness on transfer learning, e.g., whether the transferred knowledge is adversarially robust or algorithmically fair, how to transfer the knowledge under privacy-preserving constraints, etc. Beyond discussing the current advancements, we highlight the open questions and future directions for understanding transfer learning in a reliable and trustworthy manner.
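To make the first research question above concrete, one common family of transferability proxies scores a pre-trained feature extractor by how well a very simple classifier, fit on target-domain features, predicts target labels. The sketch below is a hedged, illustrative toy (not a method from the surveyed paper): it uses synthetic two-class features and a nearest-centroid probe, and all names and data are hypothetical.

```python
import random

def transferability_score(features, labels):
    """Toy transferability proxy: accuracy of a nearest-centroid
    classifier on (feature, label) pairs from the target domain.
    Higher scores suggest the features separate target classes well."""
    # Compute the mean feature vector (centroid) per class.
    centroids = {}
    for f, y in zip(features, labels):
        sums, count = centroids.setdefault(y, ([0.0] * len(f), 0))
        centroids[y] = ([s + x for s, x in zip(sums, f)], count + 1)
    for y, (sums, count) in centroids.items():
        centroids[y] = [s / count for s in sums]

    # Score: fraction of points whose nearest centroid matches their label.
    def nearest(f):
        return min(centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(f, centroids[y])))

    correct = sum(1 for f, y in zip(features, labels) if nearest(f) == y)
    return correct / len(features)

# Synthetic target-domain features from two well-separated classes.
random.seed(0)
feats = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(50)] + \
        [[random.gauss(3, 0.1), random.gauss(3, 0.1)] for _ in range(50)]
labs = [0] * 50 + [1] * 50
score = transferability_score(feats, labs)
print(score)
```

A higher score for one feature extractor than another, on the same target data, is the kind of quantitative comparison the transferability literature formalizes with more principled measures.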
Related papers
- Bayesian Transfer Learning [13.983016833412307]
"Transfer learning" seeks to improve inference and/or predictive accuracy on a domain of interest by leveraging data from related domains.
This article highlights Bayesian approaches to transfer learning, which have received relatively limited attention despite their innate compatibility with the notion of drawing upon prior knowledge to guide new learning tasks.
We discuss how these methods address the problem of finding the optimal information to transfer between domains, which is a central question in transfer learning.
arXiv Detail & Related papers (2023-12-20T23:38:17Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Graph Enabled Cross-Domain Knowledge Transfer [1.52292571922932]
Cross-Domain Knowledge Transfer is an approach to bridging the gap between good representation learning and the scarcity of knowledge in the domain of interest.
From the machine learning perspective, the paradigm of semi-supervised learning leverages large amounts of unlabeled data and achieves impressive improvements in learning performance.
arXiv Detail & Related papers (2023-04-07T03:02:10Z) - A Theory for Knowledge Transfer in Continual Learning [7.056222499095849]
Continual learning of tasks is an active area of research in deep neural networks.
Recent work has investigated forward knowledge transfer to new tasks.
We present a theory for knowledge transfer in continual supervised learning.
arXiv Detail & Related papers (2022-08-14T22:28:26Z) - Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z) - Kformer: Knowledge Injection in Transformer Feed-Forward Layers [107.71576133833148]
We propose a novel knowledge fusion model, namely Kformer, which incorporates external knowledge through the feed-forward layer in Transformer.
We empirically find that simply injecting knowledge into the FFN can enhance the pre-trained language model's capability and improve current knowledge fusion methods.
arXiv Detail & Related papers (2022-01-15T03:00:27Z) - KAT: A Knowledge Augmented Transformer for Vision-and-Language [56.716531169609915]
We propose a novel model - Knowledge Augmented Transformer (KAT) - which achieves a strong state-of-the-art result on the open-domain multimodal task of OK-VQA.
Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation.
An additional benefit of explicit knowledge integration is seen in improved interpretability of model predictions in our analysis.
arXiv Detail & Related papers (2021-12-16T04:37:10Z) - What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such models are similar in feature space and close in parameter space.
arXiv Detail & Related papers (2020-08-26T17:23:40Z) - Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability [27.65302656389911]
We analyze and demonstrate the connections between knowledge transferability and adversarial transferability.
Our theoretical studies show that adversarial transferability indicates knowledge transferability and vice versa.
We conduct extensive experiments for different scenarios on diverse datasets, showing a positive correlation between adversarial transferability and knowledge transferability.
arXiv Detail & Related papers (2020-06-25T16:04:47Z) - Limits of Transfer Learning [0.0]
We show the need to carefully select which sets of information to transfer, and the need for dependence between the transferred information and the target problem.
These results build on the algorithmic search framework for machine learning, allowing the results to apply to a wide range of learning problems using transfer.
arXiv Detail & Related papers (2020-06-23T01:48:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.