Bayesian Transfer Learning
- URL: http://arxiv.org/abs/2312.13484v1
- Date: Wed, 20 Dec 2023 23:38:17 GMT
- Title: Bayesian Transfer Learning
- Authors: Piotr M. Suder, Jason Xu, David B. Dunson
- Abstract summary: "Transfer learning" seeks to improve inference and/or predictive accuracy on a domain of interest by leveraging data from related domains.
This article highlights Bayesian approaches to transfer learning, which have received relatively limited attention despite their innate compatibility with the notion of drawing upon prior knowledge to guide new learning tasks.
We discuss how these methods address the problem of finding the optimal information to transfer between domains, which is a central question in transfer learning.
- Score: 13.983016833412307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning is a burgeoning concept in statistical machine learning
that seeks to improve inference and/or predictive accuracy on a domain of
interest by leveraging data from related domains. While the term "transfer
learning" has garnered much recent interest, its foundational principles have
existed for years under various guises. Prior literature reviews in computer
science and electrical engineering have sought to bring these ideas into focus,
primarily surveying general methodologies and works from these disciplines.
This article highlights Bayesian approaches to transfer learning, which have
received relatively limited attention despite their innate compatibility with
the notion of drawing upon prior knowledge to guide new learning tasks. Our
survey encompasses a wide range of Bayesian transfer learning frameworks
applicable to a variety of practical settings. We discuss how these methods
address the problem of finding the optimal information to transfer between
domains, which is a central question in transfer learning. We illustrate the
utility of Bayesian transfer learning methods via a simulation study where we
compare performance against frequentist competitors.
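To make the borrowing-of-information idea concrete, here is a minimal sketch in a conjugate Gaussian model (an illustration, not any specific method from the survey): the source-domain likelihood is tempered by a power-prior weight a0 in [0, 1], so a0 directly controls how much source information is transferred before the target data are observed.

```python
# Minimal sketch of Bayesian transfer via a power prior (illustrative;
# not any specific method from the survey). a0 = 0 ignores the source
# domain entirely; a0 = 1 treats source data as exchangeable with
# target data.
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0                             # known observation variance
mu0, tau2 = 0.0, 10.0                    # vague initial prior on the mean

source = rng.normal(1.0, 1.0, size=200)  # large related source sample
target = rng.normal(1.2, 1.0, size=15)   # small target sample

def gaussian_update(prior_mean, prior_var, data, weight=1.0):
    """Conjugate update of a Gaussian mean; `weight` tempers the likelihood."""
    post_var = 1.0 / (1.0 / prior_var + weight * len(data) / sigma2)
    post_mean = post_var * (prior_mean / prior_var
                            + weight * data.sum() / sigma2)
    return post_mean, post_var

for a0 in (0.0, 0.5, 1.0):
    m, v = gaussian_update(mu0, tau2, source, weight=a0)  # transfer step
    m, v = gaussian_update(m, v, target)                  # target update
    print(f"a0={a0:.1f}: posterior mean {m:.3f}, sd {np.sqrt(v):.3f}")
```

In practice a0 can itself be given a prior and learned from the data, which is one Bayesian answer to the question of how much information to transfer.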
Related papers
- A Recent Survey of Heterogeneous Transfer Learning [15.830786437956144]
Heterogeneous transfer learning (HTL) has become a vital strategy in a variety of tasks.
We offer an extensive review of over 60 HTL methods, covering both data-based and model-based approaches.
We explore applications in natural language processing, computer vision, multimodal learning, and biomedicine.
arXiv Detail & Related papers (2023-10-12T16:19:58Z)
- Feasibility of Transfer Learning: A Mathematical Framework [4.530876736231948]
The paper begins by establishing the necessary mathematical concepts and constructing a mathematical framework for transfer learning.
It then identifies and formulates the three-step transfer learning procedure as an optimization problem, allowing the feasibility question to be resolved; one plausible rendering is sketched below.
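For intuition, here is one plausible rendering of a three-step procedure as an optimization problem (the notation is an assumption for illustration, not taken from the paper): map target inputs into the source input space, apply the pretrained source model, and map outputs back to the target space.

```latex
% Assumed notation: f_S is the pretrained source model, D_T the target
% distribution, \ell a loss; T_in and T_out are the input and output
% transport maps being optimized (requires amsmath).
\[
  (T_{\mathrm{in}}^{*},\, T_{\mathrm{out}}^{*})
  \in \operatorname*{arg\,min}_{T_{\mathrm{in}},\, T_{\mathrm{out}}}
  \; \mathbb{E}_{(x,y) \sim \mathcal{D}_{T}}
  \,\ell\!\left( (T_{\mathrm{out}} \circ f_{S} \circ T_{\mathrm{in}})(x),\; y \right)
\]
% Transfer is feasible when this minimized target risk is acceptably
% small, e.g. no worse than training on the target data alone.
```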
arXiv Detail & Related papers (2023-05-22T12:44:38Z)
- Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised Domain Adaptation [70.85686267987744]
Unsupervised domain adaptation methods transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose an interpretive model of unsupervised domain adaptation, as the first attempt to visually unveil the mystery of transferred knowledge.
Our method provides an intuitive explanation for the base model's predictions and unveils transfer knowledge by matching the image patches with the same semantics across both source and target domains.
arXiv Detail & Related papers (2023-03-04T03:02:12Z)
- Bayesian Learning for Neural Networks: an algorithmic survey [95.42181254494287]
This self-contained survey engages and introduces readers to the principles and algorithms of Bayesian Learning for Neural Networks.
It provides an introduction to the topic from an accessible, practical-algorithmic perspective.
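As a taste of the algorithmic side, here is a minimal sketch (our own toy setup, not code from the survey) of random-walk Metropolis over the weights of a logistic-regression "network" with a Gaussian prior; the survey covers far more scalable alternatives such as variational inference and stochastic-gradient MCMC.

```python
# Random-walk Metropolis over the weights of a tiny logistic model
# with a Gaussian prior. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X @ np.array([1.5, -2.0]) + 0.3 * rng.normal(size=100) > 0).astype(float)

def log_post(w):
    """Log posterior: N(0, 5^2) prior plus Bernoulli log-likelihood."""
    logits = X @ w
    log_lik = np.sum(y * logits - np.logaddexp(0.0, logits))
    log_prior = -0.5 * np.sum(w ** 2) / 25.0
    return log_lik + log_prior

w = np.zeros(2)
samples, lp = [], log_post(w)
for step in range(5000):
    prop = w + 0.3 * rng.normal(size=2)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        w, lp = prop, lp_prop
    if step >= 1000:                           # discard burn-in
        samples.append(w.copy())

samples = np.array(samples)
print("posterior mean:", samples.mean(axis=0))
print("posterior sd:  ", samples.std(axis=0))
```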
arXiv Detail & Related papers (2022-11-21T21:36:58Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- A Concise Review of Transfer Learning [1.5771347525430772]
Transfer learning aims to boost the performance of a target learner by leveraging data from a related source domain.
Traditional machine learning and data mining techniques assume that the training and testing data are drawn from the same feature space and distribution.
arXiv Detail & Related papers (2021-04-05T20:34:55Z)
- A Taxonomy of Similarity Metrics for Markov Decision Processes [62.997667081978825]
In recent years, transfer learning has succeeded in making Reinforcement Learning (RL) algorithms more efficient.
In this paper, we propose a categorization of these metrics and analyze the definitions of similarity proposed so far.
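To illustrate the kind of object such metrics compare, here is a crude distance between two finite MDPs (a hypothetical construction for illustration, far simpler than the bisimulation-style metrics in the taxonomy): it combines per-state-action reward gaps with total-variation distances between next-state distributions, with an assumed weighting constant c.

```python
# Crude, illustrative distance between two finite MDPs; not one of
# the metrics from the taxonomy. The constant c is an assumption.
import numpy as np

def mdp_distance(P1, R1, P2, R2, c=0.5):
    """P: (S, A, S) transition tensors; R: (S, A) reward matrices."""
    reward_gap = np.abs(R1 - R2)                    # (S, A)
    tv_gap = 0.5 * np.abs(P1 - P2).sum(axis=-1)     # (S, A) total variation
    return float((reward_gap + c * tv_gap).max())

rng = np.random.default_rng(2)
S, A = 4, 2
P1 = rng.dirichlet(np.ones(S), size=(S, A))         # source MDP
R1 = rng.uniform(size=(S, A))
P2 = np.clip(P1 + 0.05 * rng.normal(size=P1.shape), 1e-9, None)
P2 /= P2.sum(axis=-1, keepdims=True)                # perturbed target MDP
R2 = R1 + 0.05 * rng.normal(size=R1.shape)

print("distance:", mdp_distance(P1, R1, P2, R2))
```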
arXiv Detail & Related papers (2021-03-08T12:36:42Z)
- Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models learned without supervision to a target network.
Our motivation is that models are expected to capture complex dynamics learned from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant source models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
- What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin of the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
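The "same basin" claim can be probed with a simple diagnostic: evaluate the loss along the straight line between two solutions and look for a barrier. A minimal sketch, using a toy quadratic loss in place of a real network (an assumption for illustration):

```python
# Linear-interpolation basin check: a low barrier along the path
# between two solutions is evidence they share a loss basin.
import numpy as np

def loss(w):
    """Toy quadratic loss with a single basin around (1, 1)."""
    return float(np.sum((w - 1.0) ** 2))

w_a = np.array([0.8, 1.3])   # e.g., two fine-tuning runs started from
w_b = np.array([1.2, 0.7])   # the same pre-trained weights

alphas = np.linspace(0.0, 1.0, 11)
path = [loss((1 - a) * w_a + a * w_b) for a in alphas]
barrier = max(path) - max(path[0], path[-1])
print("losses along path:", np.round(path, 3))
print("barrier height:   ", round(barrier, 3))  # ~0 => shared basin
```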
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
- Limits of Transfer Learning [0.0]
We show the need to carefully select which sets of information to transfer and the need for dependence between transferred information and target problems.
These results build on the algorithmic search framework for machine learning, allowing the results to apply to a wide range of learning problems using transfer.
arXiv Detail & Related papers (2020-06-23T01:48:23Z)
- Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep Character Recognition [2.320417845168326]
Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models.
The technique of pre-training on one task and then retraining on a new one is called transfer learning.
In this paper we analyse the effectiveness of using deep transfer learning for character recognition tasks.
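A minimal sketch of the pre-train-then-retrain recipe (an assumed setup, not the paper's exact experiments): load an ImageNet-pretrained ResNet-18 from torchvision, swap in a new classification head for `num_classes` character labels, and fine-tune only that head.

```python
# Transfer-learning sketch with torchvision; the dataset and class
# count are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 26                                   # e.g., A-Z characters
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():                   # freeze the backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; swap in a real
# DataLoader over character images in practice.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```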
arXiv Detail & Related papers (2020-01-02T14:18:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.