A Foliated View of Transfer Learning
- URL: http://arxiv.org/abs/2008.00546v1
- Date: Sun, 2 Aug 2020 19:30:59 GMT
- Title: A Foliated View of Transfer Learning
- Authors: Janith Petangoda, Nick A. M. Monk and Marc Peter Deisenroth
- Abstract summary: Transfer learning considers a learning process where a new task is solved by transferring relevant knowledge from known solutions to related tasks.
While this has been studied experimentally, a foundational description of the transfer learning problem that exposes what related tasks are, and how they can be exploited, has been lacking.
We present a definition for relatedness between tasks and identify foliations as a mathematical framework to represent such relationships.
- Score: 13.71317837122096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning considers a learning process where a new task is solved by
transferring relevant knowledge from known solutions to related tasks. While
this has been studied experimentally, a foundational description of the
transfer learning problem that exposes what related tasks are, and how they can
be exploited, has been lacking. In this work, we present a definition for relatedness between
tasks and identify foliations as a mathematical framework to represent such
relationships.
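The abstract's notion of relatedness can be made concrete. The following is a plausible formalization consistent with the foliation framing, not a verbatim reproduction of the paper's definitions: related tasks are points on the same leaf of a foliated task space, where leaves arise as orbits of a group action.

```latex
% Hypothetical sketch of relatedness via a group action; the notation
% is illustrative, not taken verbatim from the paper.
% Let $\mathcal{T}$ be a smooth manifold of tasks and $G$ a Lie group
% acting smoothly on $\mathcal{T}$.
\[
  t_1 \sim t_2 \quad \Longleftrightarrow \quad
  \exists\, g \in G \;\text{such that}\; t_2 = g \cdot t_1 .
\]
% The orbits $G \cdot t$ partition $\mathcal{T}$; under suitable
% regularity conditions they are the leaves of a foliation, so
% "related tasks" = "tasks lying on the same leaf".
\[
  \mathcal{T} \;=\; \bigsqcup_{[t]} G \cdot t ,
  \qquad L_t := G \cdot t \ \text{(the leaf through } t\text{)} .
\]
```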
Related papers
- Disentangling and Mitigating the Impact of Task Similarity for Continual Learning [1.3597551064547502]
Continual learning of partially similar tasks poses a challenge for artificial neural networks.
High input feature similarity coupled with low readout similarity is catastrophic for both knowledge transfer and retention.
Weight regularization based on the Fisher information metric significantly improves retention, regardless of task similarity.
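As a concrete illustration of Fisher-based weight regularization for retention, here is a minimal NumPy sketch in the spirit of elastic weight consolidation (EWC); the function names and the diagonal-Fisher approximation are illustrative assumptions, not this paper's implementation:

```python
import numpy as np

def fisher_penalty(theta, theta_star, fisher_diag, lam=1.0):
    """Quadratic penalty anchoring parameters important for old tasks.

    theta       : current parameters (1-D array)
    theta_star  : parameters learned on the previous task
    fisher_diag : diagonal Fisher information estimated on the previous task
    lam         : regularization strength (hypothetical choice)
    """
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

def total_loss(task_loss, theta, theta_star, fisher_diag, lam=1.0):
    # New-task loss plus retention penalty: parameters with high Fisher
    # information (important for the old task) are moved less.
    return task_loss + fisher_penalty(theta, theta_star, fisher_diag, lam)

# Toy usage: 5 parameters, the first two were important for the old task.
theta_star = np.zeros(5)
fisher = np.array([10.0, 8.0, 0.1, 0.1, 0.1])
theta = np.array([0.5, 0.5, 0.5, 0.5, 0.5])
print(total_loss(1.23, theta, theta_star, fisher, lam=1.0))
```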
arXiv Detail & Related papers (2024-05-30T16:40:07Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
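The transfer protocol such experiments typically rely on can be sketched as freezing a pretrained encoder and retraining only a task-specific head; the PyTorch code below is a generic illustration under that assumption, not the study's actual pipeline (channel counts and class counts are made up):

```python
import torch
import torch.nn as nn

# Hypothetical encoder pretrained on a source EEG decoding task.
encoder = nn.Sequential(
    nn.Conv1d(in_channels=32, out_channels=16, kernel_size=7),  # 32 EEG channels (assumed)
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)

# Freeze the transferred representation.
for p in encoder.parameters():
    p.requires_grad = False

# New linear head for the target task (4-class decoding, assumed).
head = nn.Linear(16, 4)
model = nn.Sequential(encoder, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
x = torch.randn(8, 32, 256)          # batch of 8 EEG windows, 256 samples each
y = torch.randint(0, 4, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```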
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- The Role of Exploration for Task Transfer in Reinforcement Learning [8.817381809671804]
We re-examine the exploration-exploitation trade-off in the context of transfer learning.
In this work, we review reinforcement learning exploration methods, define a taxonomy with which to organize them, analyze these methods' differences in the context of task transfer, and suggest avenues for future investigation.
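One axis along which such a taxonomy typically organizes methods is the amount of undirected randomness injected into action selection; a minimal epsilon-greedy example (illustrative, not one of the paper's surveyed methods) makes the exploration-exploitation dial explicit:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, else the greedy one.

    q_values : list of action-value estimates for the current state
    epsilon  : exploration rate in [0, 1]; higher means more exploration
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))                      # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

# On transfer, one heuristic (an assumption, not the paper's prescription) is
# to restart with a higher epsilon so the agent re-explores regions where the
# source policy may mislead it.
q = [0.1, 0.7, 0.3]
print(epsilon_greedy(q, epsilon=0.2))
```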
arXiv Detail & Related papers (2022-10-11T01:23:21Z)
- Multi-Source Transfer Learning for Deep Model-Based Reinforcement Learning [0.6445605125467572]
A crucial challenge in reinforcement learning is to reduce the number of interactions with the environment that an agent requires to master a given task.
Transfer learning proposes to address this issue by re-using knowledge from previously learned tasks.
The goal of this paper is to address these issues with modular multi-source transfer learning techniques.
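A minimal sketch of what modular multi-source transfer can look like, assuming (as an illustration, not the paper's architecture) that several source-task modules are combined by fixed or learned weights:

```python
import numpy as np

def multi_source_predict(x, source_modules, weights):
    """Combine predictions from modules trained on different source tasks.

    x              : input features (1-D array)
    source_modules : list of callables, each a model from one source task
    weights        : per-source mixing weights (sum to 1); could be learned
    """
    preds = np.stack([m(x) for m in source_modules])
    return np.tensordot(weights, preds, axes=1)

# Toy source modules: linear models with different weight vectors (assumed).
rng = np.random.default_rng(0)
W = [rng.normal(size=(3,)) for _ in range(3)]
modules = [lambda x, w=w: w @ x for w in W]

x = np.array([1.0, -0.5, 2.0])
print(multi_source_predict(x, modules, weights=np.array([0.5, 0.3, 0.2])))
```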
arXiv Detail & Related papers (2022-05-28T12:04:52Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (ERR), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity trade-off.
ERR can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
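The bi-level idea can be illustrated with a simplified, single-level stand-in: a replay loss whose per-task weights are tunable parameters. In ERR those weights would themselves be optimized in an outer loop, which this sketch only gestures at; names and structure are assumptions:

```python
import torch

def weighted_replay_loss(new_task_loss, replay_losses, task_weights):
    """Combine the current-task loss with replayed old-task losses.

    replay_losses : tensor of per-old-task losses on replayed samples
    task_weights  : per-task weights; tuning them trades off stability
                    (retaining old tasks) against plasticity (learning the
                    new one). ERR learns these adaptively (outer loop not
                    shown here).
    """
    return new_task_loss + torch.sum(task_weights * replay_losses)

# Toy usage with three previously learned tasks.
replay = torch.tensor([0.8, 0.2, 0.5])
weights = torch.tensor([1.0, 0.3, 0.6], requires_grad=True)  # outer-loop variables
total = weighted_replay_loss(torch.tensor(1.1), replay, weights)
total.backward()   # gradients w.r.t. the task weights, usable by an outer loop
print(total.item(), weights.grad)
```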
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- On the relationship between disentanglement and multi-task learning [62.997667081978825]
We take a closer look at the relationship between disentanglement and multi-task learning based on hard parameter sharing.
We show that disentanglement appears naturally during the process of multi-task neural network training.
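Hard parameter sharing, the setting studied here, means a single shared trunk feeding several task-specific heads; the claim is that the trunk's latent representation tends to disentangle as the heads pull it toward different factors. A minimal PyTorch sketch of the architecture (dimensions are assumptions):

```python
import torch
import torch.nn as nn

class HardSharingNet(nn.Module):
    """Shared trunk (hard parameter sharing) with per-task heads."""

    def __init__(self, in_dim=10, latent_dim=4, n_tasks=2):
        super().__init__()
        # All tasks share these weights; disentanglement, if it emerges,
        # shows up in this latent representation.
        self.trunk = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                   nn.Linear(32, latent_dim))
        self.heads = nn.ModuleList(nn.Linear(latent_dim, 1) for _ in range(n_tasks))

    def forward(self, x):
        z = self.trunk(x)
        return [head(z) for head in self.heads], z

model = HardSharingNet()
x = torch.randn(5, 10)
outputs, z = model(x)
# Joint loss: sum of per-task losses, all backpropagated through the trunk.
targets = [torch.randn(5, 1) for _ in outputs]
loss = sum(nn.functional.mse_loss(o, t) for o, t in zip(outputs, targets))
loss.backward()
```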
arXiv Detail & Related papers (2021-10-07T14:35:34Z)
- Learning to Transfer: A Foliated Theory [18.58482811176484]
Learning to transfer considers learning solutions to tasks in such a way that relevant knowledge can be transferred to new, related tasks.
This is important for general learning, as well as for improving the efficiency of the learning process.
We introduce a framework using the differential geometric theory of foliations that provides such a foundation.
arXiv Detail & Related papers (2021-07-22T15:46:45Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification, and regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
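Heterogeneous MTL here means heads of different output types trained jointly; a minimal sketch of such a joint objective follows (the task set and losses below are illustrative, not FaceBehaviorNet's actual heads):

```python
import torch
import torch.nn as nn

# Shared face-feature extractor (a stand-in for a real backbone).
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())

det_head = nn.Linear(64, 1)    # detection: binary logit
cls_head = nn.Linear(64, 7)    # classification: e.g., 7 expressions (assumed)
reg_head = nn.Linear(64, 2)    # regression: e.g., valence/arousal (assumed)

x = torch.randn(16, 128)
feat = backbone(x)

# Heterogeneous losses, one per problem type, summed into one objective.
loss = (
    nn.functional.binary_cross_entropy_with_logits(
        det_head(feat), torch.randint(0, 2, (16, 1)).float())
    + nn.functional.cross_entropy(cls_head(feat), torch.randint(0, 7, (16,)))
    + nn.functional.mse_loss(reg_head(feat), torch.randn(16, 2))
)
loss.backward()
```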
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- A Taxonomy of Similarity Metrics for Markov Decision Processes [62.997667081978825]
In recent years, transfer learning has succeeded in making Reinforcement Learning (RL) algorithms more efficient.
In this paper, we propose a categorization of the similarity metrics used for such transfer and analyze the definitions of similarity proposed so far.
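To make "similarity metric between MDPs" concrete, here is a deliberately naive example (not one of the surveyed metrics) that compares two finite MDPs by the discrepancy in their reward and transition tables:

```python
import numpy as np

def naive_mdp_distance(R1, P1, R2, P2, alpha=0.5):
    """Crude distance between two finite MDPs (illustrative only).

    R : reward table of shape (states, actions)
    P : transition tensor of shape (states, actions, next_states)
    Returns a weighted worst-case discrepancy; the surveyed metrics
    (e.g., bisimulation metrics) are far more refined than this.
    """
    reward_gap = np.max(np.abs(R1 - R2))
    # Worst-case total-variation distance between transition distributions.
    transition_gap = 0.5 * np.max(np.sum(np.abs(P1 - P2), axis=-1))
    return alpha * reward_gap + (1 - alpha) * transition_gap

# Two toy 2-state, 2-action MDPs.
R1 = np.array([[1.0, 0.0], [0.0, 1.0]])
R2 = np.array([[0.9, 0.0], [0.1, 1.0]])
P1 = np.full((2, 2, 2), 0.5)
P2 = np.array([[[0.6, 0.4], [0.5, 0.5]], [[0.5, 0.5], [0.4, 0.6]]])
print(naive_mdp_distance(R1, P1, R2, P2))
```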
arXiv Detail & Related papers (2021-03-08T12:36:42Z)
- Phase Transitions in Transfer Learning for High-Dimensional Perceptrons [12.614901374282868]
Transfer learning seeks to improve the generalization performance of a target task by exploiting knowledge learned from a related source task.
When transfer helps, and when it hurts, is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task.
We present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks.
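The setup can be mimicked in a few lines: two teacher perceptrons with correlated weight vectors define the source and target tasks, and transfer amounts to initializing the target student at the source solution. The NumPy sketch below is a toy version of that setup, not the paper's analysis; the correlation rho is an assumed knob for task relatedness:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50

# Correlated teachers: target = rho * source + noise (rho controls relatedness).
rho = 0.8
w_src = rng.normal(size=d)
w_tgt = rho * w_src + np.sqrt(1 - rho**2) * rng.normal(size=d)

def perceptron_labels(w, X):
    return np.sign(X @ w)

# Small target training set; the source solution is assumed known exactly.
X = rng.normal(size=(20, d))
y = perceptron_labels(w_tgt, X)

def train(w_init, epochs=10):
    """Classic perceptron updates starting from a given initialization."""
    w = w_init.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if np.sign(xi @ w) != yi:
                w += yi * xi
    return w

# "Transfer" starts at the source weights; "scratch" starts at zero.
X_test = rng.normal(size=(2000, d))
y_test = perceptron_labels(w_tgt, X_test)
for name, w0 in [("transfer", w_src), ("scratch", np.zeros(d))]:
    acc = np.mean(perceptron_labels(train(w0), X_test) == y_test)
    print(name, acc)
```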
arXiv Detail & Related papers (2021-01-06T08:29:22Z)
- Auxiliary Learning by Implicit Differentiation [54.92146615836611]
Training neural networks with auxiliary tasks is a common practice for improving the performance on a main task of interest.
Here, we propose a novel framework, AuxiLearn, that targets both challenges, combining known auxiliary tasks and devising new ones, based on implicit differentiation.
First, when useful auxiliaries are known, we propose learning a network that combines all losses into a single coherent objective function.
Second, when no useful auxiliary task is known, we describe how to learn a network that generates a meaningful, novel auxiliary task.
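AuxiLearn's implicit differentiation is beyond a short snippet, but the core idea, differentiating the main-task validation loss through an inner update in order to tune auxiliary-loss weights, can be shown with a one-step unrolled approximation. This is a simplification of that idea, not the paper's method; all names and losses are stand-ins:

```python
import torch

# Model parameter (a scalar for clarity) and a learnable auxiliary weight.
theta = torch.tensor(0.0, requires_grad=True)
aux_weight = torch.tensor(0.5, requires_grad=True)
lr_inner = 0.1

def main_loss(t):  # stand-in main-task training loss
    return (t - 2.0) ** 2

def aux_loss(t):   # stand-in auxiliary-task loss
    return (t - 1.0) ** 2

def val_loss(t):   # main-task validation loss used by the outer loop
    return (t - 2.0) ** 2

# Inner step: update theta on the combined loss, keeping the graph so the
# outer gradient can flow through the update (one-step unroll).
inner = main_loss(theta) + aux_weight * aux_loss(theta)
g, = torch.autograd.grad(inner, theta, create_graph=True)
theta_new = theta - lr_inner * g

# Outer step: gradient of the validation loss w.r.t. the auxiliary weight.
outer = val_loss(theta_new)
outer.backward()
print("d val / d aux_weight:", aux_weight.grad.item())
```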
arXiv Detail & Related papers (2020-06-22T19:35:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.