Gradient Surgery for Multi-Task Learning
- URL: http://arxiv.org/abs/2001.06782v4
- Date: Tue, 22 Dec 2020 00:35:46 GMT
- Title: Gradient Surgery for Multi-Task Learning
- Authors: Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol
Hausman, Chelsea Finn
- Abstract summary: Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks.
The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood.
We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient.
- Score: 119.675492088251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep learning and deep reinforcement learning (RL) systems have
demonstrated impressive results in domains such as image classification, game
playing, and robotic control, data efficiency remains a major challenge.
Multi-task learning has emerged as a promising approach for sharing structure
across multiple tasks to enable more efficient learning. However, the
multi-task setting presents a number of optimization challenges, making it
difficult to realize large efficiency gains compared to learning tasks
independently. The reasons why multi-task learning is so challenging compared
to single-task learning are not fully understood. In this work, we identify a
set of three conditions of the multi-task optimization landscape that cause
detrimental gradient interference, and develop a simple yet general approach
for avoiding such interference between task gradients. We propose a form of
gradient surgery that projects a task's gradient onto the normal plane of the
gradient of any other task that has a conflicting gradient. On a series of
challenging multi-task supervised and multi-task RL problems, this approach
leads to substantial gains in efficiency and performance. Further, it is
model-agnostic and can be combined with previously-proposed multi-task
architectures for enhanced performance.
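As a rough illustration of the projection step described in the abstract, here is a minimal NumPy sketch (not the authors' released implementation). It assumes each task's gradient has already been flattened into a single vector; the function name, the random ordering of the other tasks, and the small epsilon are illustrative choices.
```python
import numpy as np

def project_conflicting_gradients(task_grads, rng=None, eps=1e-12):
    """De-conflict per-task gradients: project each task's gradient onto the
    normal plane of any other task's gradient it conflicts with (negative dot
    product), then sum the results. Illustrative sketch only.

    task_grads: list of 1-D arrays, one flattened gradient per task.
    """
    rng = rng or np.random.default_rng()
    projected = [g.astype(np.float64).copy() for g in task_grads]
    num_tasks = len(task_grads)
    for i in range(num_tasks):
        g_i = projected[i]
        # Visit the other tasks in random order and project sequentially.
        others = [j for j in range(num_tasks) if j != i]
        for j in rng.permutation(others):
            g_j = task_grads[j]
            dot = float(g_i @ g_j)
            if dot < 0.0:  # gradients conflict
                # Subtract the component of g_i along g_j, keeping only the
                # part of g_i lying in the plane normal to g_j.
                g_i -= dot / (float(g_j @ g_j) + eps) * g_j
    return sum(projected)

# Toy usage: two conflicting 2-D gradients.
g1 = np.array([1.0, 1.0])
g2 = np.array([-1.0, 0.5])
combined = project_conflicting_gradients([g1, g2])
```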
Related papers
- An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale
Multitask Learning Systems [4.675744559395732]
Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer.
State-of-the-art ML models rely on high customization for each task and leverage size and data scale rather than scaling the number of tasks.
We propose an evolutionary method that can generate a large-scale multitask model and can support the dynamic and continuous addition of new tasks.
arXiv Detail & Related papers (2022-05-25T13:10:47Z) - Conflict-Averse Gradient Descent for Multi-task Learning [56.379937772617]
A major challenge in optimizing a multi-task model is the presence of conflicting gradients.
We introduce Conflict-Averse Gradient descent (CAGrad), which minimizes the average loss function while leveraging the worst local improvement among the individual tasks to regularize the update direction.
CAGrad balances the objectives automatically and still provably converges to a minimum over the average loss.
arXiv Detail & Related papers (2021-10-26T22:03:51Z) - Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose MultiRavens, a new benchmark suite aimed at compositional tasks that allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z) - Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient update would affect another task's loss.
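A hedged PyTorch sketch of that lookahead measurement follows; the function name, the single-SGD-step approximation, and the "one minus loss ratio" form are assumptions made for illustration, not the paper's exact affinity definition.
```python
import torch

def lookahead_affinity(shared_params, loss_i, eval_loss_j, lr=1e-3):
    """Estimate how a gradient step on task i would affect task j's loss.
    Illustrative sketch; the paper's exact affinity measure may differ.

    shared_params: list of parameter tensors shared across tasks.
    loss_i: scalar loss for task i at the current parameters (graph intact).
    eval_loss_j: callable that recomputes task j's loss on the same batch.
    Returns a positive value when the step on task i reduces task j's loss.
    """
    grads = torch.autograd.grad(loss_i, shared_params, retain_graph=True)
    with torch.no_grad():
        loss_j_before = eval_loss_j().item()
        # Take a hypothetical SGD step using task i's gradient ...
        for p, g in zip(shared_params, grads):
            p.sub_(lr * g)
        loss_j_after = eval_loss_j().item()
        # ... then undo it so training proceeds unchanged.
        for p, g in zip(shared_params, grads):
            p.add_(lr * g)
    return 1.0 - loss_j_after / loss_j_before
```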
arXiv Detail & Related papers (2021-09-10T02:01:43Z) - Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade the single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - Knowledge Distillation for Multi-task Learning [38.20005345733544]
Multi-task learning (MTL) aims to learn a single model that performs multiple tasks, achieving good performance on all tasks at a lower computational cost.
Learning such a model requires jointly optimizing the losses of a set of tasks with different difficulty levels, magnitudes, and characteristics.
We propose a knowledge distillation based method in this work to address the imbalance problem in multi-task learning.
arXiv Detail & Related papers (2020-07-14T08:02:42Z) - Multitask Learning with Single Gradient Step Update for Task Balancing [4.330814031477772]
We propose an algorithm that balances tasks at the gradient level by applying gradient-based meta-learning to multitask learning.
We apply the proposed method to various multitask computer vision problems and achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-05-20T08:34:20Z)