Measuring and Harnessing Transference in Multi-Task Learning
- URL: http://arxiv.org/abs/2010.15413v3
- Date: Fri, 10 Sep 2021 06:55:37 GMT
- Title: Measuring and Harnessing Transference in Multi-Task Learning
- Authors: Christopher Fifty, Ehsan Amid, Zhe Zhao, Tianhe Yu, Rohan Anil,
Chelsea Finn
- Abstract summary: Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
- Score: 58.48659733262734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task learning can leverage information learned by one task to benefit
the training of other tasks. Despite this capacity, naive formulations often
degrade performance and, in particular, identifying the tasks that would benefit
from co-training remains a challenging design question. In this paper, we
analyze the dynamics of information transfer, or transference, across tasks
throughout training. Specifically, we develop a similarity measure that can
quantify transference among tasks and use this quantity to both better
understand the optimization dynamics of multi-task learning as well as improve
overall learning performance. In the latter case, we propose two methods to
leverage our transference metric. The first operates at a macro-level by
selecting which tasks should train together while the second functions at a
micro-level by determining how to combine task gradients at each training step.
We find these methods can lead to significant improvement over prior work on
three supervised multi-task learning benchmarks and one multi-task
reinforcement learning paradigm.
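The abstract describes the transference measure only at a high level. A minimal sketch of the lookahead idea it points to, under toy assumptions (a shared linear encoder with two linear task heads, synthetic data, a single plain SGD step, and a 1 - after/before normalisation), might look as follows; this illustrates the concept, not the authors' implementation:

```python
# Sketch: measure how a gradient step on one task changes another task's loss
# on shared parameters. All model/data choices here are illustrative assumptions.
import copy
import torch

torch.manual_seed(0)

# Hypothetical toy setup: a shared encoder with two task-specific heads.
shared = torch.nn.Linear(8, 8)
head_a = torch.nn.Linear(8, 1)
head_b = torch.nn.Linear(8, 1)

x = torch.randn(32, 8)
y_a = torch.randn(32, 1)
y_b = torch.randn(32, 1)

def task_loss(shared_module, head, target):
    # Loss of one task given (possibly updated) shared parameters.
    return torch.nn.functional.mse_loss(head(shared_module(x)), target)

def transference(source_head, source_y, target_head, target_y, lr=0.1):
    # 1 - L_target(shared after a step on the source task) / L_target(shared before).
    loss_before = task_loss(shared, target_head, target_y).item()

    # Look ahead: apply one SGD step on the source task to a copy of the shared weights.
    lookahead = copy.deepcopy(shared)
    step_loss = task_loss(lookahead, source_head, source_y)
    grads = torch.autograd.grad(step_loss, list(lookahead.parameters()))
    with torch.no_grad():
        for p, g in zip(lookahead.parameters(), grads):
            p -= lr * g

    loss_after = task_loss(lookahead, target_head, target_y).item()
    return 1.0 - loss_after / loss_before

print("task a -> task b:", transference(head_a, y_a, head_b, y_b))
print("task b -> task a:", transference(head_b, y_b, head_a, y_a))
```

Under this sketch, positive values indicate that updating the shared parameters on the source task also reduces the target task's loss, while values near zero or below indicate little transfer or outright interference.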
Related papers
- Multitask Learning with No Regret: from Improved Confidence Bounds to
Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting in which neither the similarity between tasks nor the tasks' features is available to the learner.
We propose a novel online learning algorithm that achieves such improved regret without knowing the tasks' similarity in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z)
- Efficient Multi-Task and Transfer Reinforcement Learning with Parameter-Compositional Framework [44.43196786555784]
We investigate the potential of improving multi-task training and of leveraging it for transfer in the reinforcement learning setting.
We propose a transfer approach based on a parameter-compositional formulation.
Experimental results demonstrate that the proposed approach improves performance in the multi-task training stage.
arXiv Detail & Related papers (2023-06-02T18:00:33Z)
- Leveraging convergence behavior to balance conflicting tasks in multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve performance generalization.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
arXiv Detail & Related papers (2022-04-14T01:52:34Z)
- A Survey of Multi-task Learning in Natural Language Processing: Regarding Task Relatedness and Training Methods [17.094426577723507]
Multi-task learning (MTL) has become increasingly popular in natural language processing (NLP).
It improves the performance of related tasks by exploiting their commonalities and differences.
However, it is still not well understood how multi-task learning should be implemented based on the relatedness of the training tasks.
arXiv Detail & Related papers (2022-04-07T15:22:19Z)
- Transfer Learning in Conversational Analysis through Reusing Preprocessing Data as Supervisors [52.37504333689262]
Using noisy labels in single-task learning increases the risk of over-fitting.
Auxiliary tasks can improve the performance of the primary task when learned jointly during the same training run.
arXiv Detail & Related papers (2021-12-02T08:40:42Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose a new benchmark suite aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient would affect another task's loss.
arXiv Detail & Related papers (2021-09-10T02:01:43Z)
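The last entry, a companion to the main paper, describes the macro-level use of such scores: co-train all tasks once, record how strongly each task's updates affect the others, and then decide which tasks should share a network. A toy brute-force selection over a pairwise affinity matrix is sketched below; the scoring rule (average affinity each task receives from its group-mates) and the example numbers are illustrative assumptions rather than the paper's exact criterion:

```python
# Sketch: pick a grouping of tasks that maximises the affinity each task
# receives from the other tasks in its group. Illustrative only.
from itertools import product

def grouping_score(affinity, groups):
    """Sum, over tasks, of the average affinity a task receives from its group-mates."""
    total = 0.0
    for group in groups:
        for t in group:
            others = [s for s in group if s != t]
            if others:
                total += sum(affinity[s][t] for s in others) / len(others)
    return total

def best_grouping(affinity, num_groups):
    """Exhaustively assign each task to one of num_groups groups and keep the
    highest-scoring assignment (feasible only for small task counts)."""
    n = len(affinity)
    best, best_score = None, float("-inf")
    for assignment in product(range(num_groups), repeat=n):
        groups = [[t for t in range(n) if assignment[t] == g] for g in range(num_groups)]
        if any(not g for g in groups):
            continue  # require every group to be non-empty
        score = grouping_score(affinity, groups)
        if score > best_score:
            best, best_score = groups, score
    return best, best_score

# Toy 4-task affinity matrix (hypothetical numbers): affinity[i][j] is how much
# a gradient step on task i helps task j.
affinity = [
    [0.0, 0.6, 0.1, 0.0],
    [0.5, 0.0, 0.2, 0.1],
    [0.0, 0.1, 0.0, 0.7],
    [0.1, 0.0, 0.6, 0.0],
]
print(best_grouping(affinity, num_groups=2))
```

Exhaustive search is only practical for a handful of tasks; for larger task sets one would need a heuristic or branch-and-bound search over candidate groupings.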