Risk of Transfer Learning and its Applications in Finance
- URL: http://arxiv.org/abs/2311.03283v1
- Date: Mon, 6 Nov 2023 17:23:54 GMT
- Title: Risk of Transfer Learning and its Applications in Finance
- Authors: Haoyang Cao, Haotian Gu, Xin Guo and Mathieu Rosenbaum
- Abstract summary: We propose a novel concept of transfer risk and analyze its properties to evaluate transferability of transfer learning.
Numerical results demonstrate a strong correlation between transfer risk and overall transfer learning performance.
- Score: 2.966069495345018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning is an emerging and popular paradigm for utilizing existing
knowledge from previous learning tasks to improve the performance of new ones.
In this paper, we propose a novel concept of transfer risk and and analyze its
properties to evaluate transferability of transfer learning. We apply transfer
learning techniques and this concept of transfer risk to stock return
prediction and portfolio optimization problems. Numerical results demonstrate a
strong correlation between transfer risk and overall transfer learning
performance, where transfer risk provides a computationally efficient way to
identify appropriate source tasks in transfer learning, including
cross-continent, cross-sector, and cross-frequency transfer for portfolio
optimization.
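The source-selection idea in the abstract can be sketched numerically: given a transfer-risk score for each candidate source task (lower meaning more transferable) and the transfer performance eventually achieved, a strong negative correlation would justify picking the lowest-risk source without running every transfer. All task names and numbers below are invented for illustration; the paper's actual definition of transfer risk is not reproduced here.

```python
import numpy as np

# Hypothetical per-source-task numbers: transfer risk (lower = more
# transferable) and realized transfer performance after fine-tuning.
# Values are illustrative, not taken from the paper.
sources = ["US-equities", "EU-equities", "Asia-equities", "Commodities"]
transfer_risk = np.array([0.12, 0.18, 0.35, 0.60])
performance = np.array([0.91, 0.86, 0.74, 0.55])  # e.g. a Sharpe-based score

# A strong negative Pearson correlation means lower risk tracks better
# transfer performance, so ranking by risk is a cheap selection rule.
corr = np.corrcoef(transfer_risk, performance)[0, 1]
best_source = sources[int(np.argmin(transfer_risk))]
```

The point of the sketch is only the selection rule: computing the risk score for each source is cheap, whereas actually fine-tuning on every source to measure `performance` is not.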
Related papers
- Transfer Learning for Portfolio Optimization [4.031388559887924]
We introduce a novel concept called "transfer risk", within the optimization framework of transfer learning.
A series of numerical experiments are conducted from three categories: cross-continent transfer, cross-sector transfer, and cross-frequency transfer.
arXiv Detail & Related papers (2023-07-25T14:48:54Z) - Feasibility of Transfer Learning: A Mathematical Framework [4.530876736231948]
It begins by establishing the necessary mathematical concepts and constructing a mathematical framework for transfer learning.
It then identifies and formulates the three-step transfer learning procedure as an optimization problem, allowing for the resolution of the feasibility issue.
arXiv Detail & Related papers (2023-05-22T12:44:38Z) - Feasibility and Transferability of Transfer Learning: A Mathematical Framework [4.031388559887924]
We build for the first time a mathematical framework for the general procedure of transfer learning.
We also propose a novel concept of transfer risk to evaluate transferability of transfer learning.
arXiv Detail & Related papers (2023-01-27T05:54:53Z) - Transferred Q-learning [79.79659145328856]
We consider $Q$-learning with knowledge transfer, using samples from a target reinforcement learning (RL) task as well as source samples from different but related RL tasks.
We propose transfer learning algorithms for both batch and online $Q$-learning with offline source studies.
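The batch setting described above can be illustrated with a deliberately simplified sketch: tabular Q-learning over pooled (s, a, r, s') transitions, where transitions from a related source task are down-weighted relative to target transitions. The `source_weight` heuristic and the tiny two-state MDP are assumptions for illustration, not the paper's transfer algorithm.

```python
import numpy as np

def q_learning_with_transfer(target_trans, source_trans, n_states, n_actions,
                             alpha=0.1, gamma=0.9, source_weight=0.5):
    # Tabular Q-learning over pooled (s, a, r, s') transitions; updates from
    # source-task samples are scaled by source_weight (a hypothetical heuristic).
    Q = np.zeros((n_states, n_actions))
    for _ in range(50):  # repeated sweeps over the fixed batches
        for trans, w in ((target_trans, 1.0), (source_trans, source_weight)):
            for s, a, r, s2 in trans:
                td_error = r + gamma * Q[s2].max() - Q[s, a]
                Q[s, a] += alpha * w * td_error
    return Q

# Toy 2-state MDP: action 1 in state 0 yields reward 1; the related source
# task reports a similar but smaller reward for the same action.
target = [(0, 1, 1.0, 1), (0, 0, 0.0, 0), (1, 0, 0.0, 1), (1, 1, 0.0, 1)]
source = [(0, 1, 0.8, 1), (0, 0, 0.0, 0)]
Q = q_learning_with_transfer(target, source, n_states=2, n_actions=2)
```

After the sweeps, the pooled updates drive `Q[0, 1]` above `Q[0, 0]`, i.e. the learned values prefer the rewarding action even though some of the evidence came from the source task.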
arXiv Detail & Related papers (2022-02-09T20:08:19Z) - On Transferability of Prompt Tuning for Natural Language Understanding [63.29235426932978]
We investigate the transferability of soft prompts across different tasks and models.
We find that trained soft prompts transfer well to similar tasks and can initialize prompt tuning (PT) for them, accelerating training and improving performance.
Our findings show that improving PT with knowledge transfer is possible and promising, and that prompts' cross-task transferability is generally better than their cross-model transferability.
arXiv Detail & Related papers (2021-11-12T13:39:28Z) - Frustratingly Easy Transferability Estimation [64.42879325144439]
We propose a simple, efficient, and effective transferability measure named TransRate.
TransRate measures transferability as the mutual information between the features of target examples extracted by a pre-trained model and their labels.
Despite its extraordinary simplicity, implementable in 10 lines of code, TransRate performs remarkably well in extensive evaluations on 22 pre-trained models and 16 downstream tasks.
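The summary can be made concrete with a small sketch in the spirit of TransRate: mutual information is estimated via a rate-distortion coding rate of the extracted features, minus the label-conditional coding rate. This is a paraphrase under assumptions, not the authors' reference code; the distortion parameter `eps` and the per-class centering are illustrative choices.

```python
import numpy as np

def coding_rate(Z, eps=1e-2):
    # Rate-distortion coding rate of a feature matrix Z (n samples x d dims),
    # 0.5 * logdet(I + d/(n*eps) * Z^T Z); larger means more spread-out features.
    n, d = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (n * eps) * Z.T @ Z)[1]

def transrate(Z, y, eps=1e-2):
    # Coding rate of all features minus the label-conditional coding rate:
    # high values mean the labels explain most of the feature variation.
    Z = Z - Z.mean(axis=0)
    r_all = coding_rate(Z, eps)
    r_cond = 0.0
    for c in np.unique(y):
        Zc = Z[y == c] - Z[y == c].mean(axis=0)  # within-class spread only
        r_cond += (y == c).mean() * coding_rate(Zc, eps)
    return r_all - r_cond

# Two well-separated feature clusters: informative labels should score far
# above randomly shuffled ones.
rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(-5, 0.1, (100, 5)), rng.normal(5, 0.1, (100, 5))])
y = np.repeat([0, 1], 100)
score_true = transrate(Z, y)
score_shuffled = transrate(Z, rng.permutation(y))
```

Since the score needs only one forward pass to extract features, no fine-tuning is required to rank pre-trained models, which is what makes this family of measures cheap.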
arXiv Detail & Related papers (2021-06-17T10:27:52Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models trained without supervision to a target network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target network even from less relevant source models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z) - What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin in the loss landscape, and different instances of such a model are similar in feature space and close in parameter space.
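The basin claim is commonly probed by linearly interpolating between two sets of trained weights and checking for a barrier in the loss along the path. Below is a minimal sketch of that check with a convex toy model, where a single basin is guaranteed and the barrier is provably non-positive; the dataset, learning rate, and model are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(w, X, y):
    # Logistic loss softplus(z) - y*z with z = Xw; convex in w.
    z = X @ w
    return np.mean(np.logaddexp(0.0, z) - y * z)

def train(w0, X, y, lr=0.5, steps=200):
    # Plain gradient descent from a shared "pretrained" initialization w0.
    w = w0.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy data; two "fine-tuning" runs start from the same init on different splits.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)
w0 = 0.1 * rng.normal(size=3)
wa = train(w0, X[:100], y[:100])
wb = train(w0, X[100:], y[100:])

# Interpolate the weights; a low (here non-positive) barrier along the path
# is the signature of both solutions sharing one loss basin.
mid_loss = loss(0.5 * (wa + wb), X, y)
barrier = mid_loss - max(loss(wa, X, y), loss(wb, X, y))
```

For deep networks the interpolation is done along many points of the path and the barrier need not be zero; the convex toy only illustrates the measurement itself.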
arXiv Detail & Related papers (2020-08-26T17:23:40Z) - Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability [27.65302656389911]
We analyze and demonstrate the connections between knowledge transferability and adversarial transferability.
Our theoretical studies show that adversarial transferability indicates knowledge transferability and vice versa.
We conduct extensive experiments for different scenarios on diverse datasets, showing a positive correlation between adversarial transferability and knowledge transferability.
arXiv Detail & Related papers (2020-06-25T16:04:47Z) - Limits of Transfer Learning [0.0]
We show the need to carefully select which sets of information to transfer and the need for dependence between transferred information and target problems.
These results build on the algorithmic search framework for machine learning, allowing the results to apply to a wide range of learning problems using transfer.
arXiv Detail & Related papers (2020-06-23T01:48:23Z) - Transfer Heterogeneous Knowledge Among Peer-to-Peer Teammates: A Model Distillation Approach [55.83558520598304]
We propose a brand new solution to reuse experiences and transfer value functions among multiple students via model distillation.
We also describe how to design an efficient communication protocol to exploit heterogeneous knowledge.
Our proposed framework, namely Learning and Teaching Categorical Reinforcement, shows promising performance on stabilizing and accelerating learning progress.
arXiv Detail & Related papers (2020-02-06T11:31:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.