Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference
- URL: http://arxiv.org/abs/2007.12540v1
- Date: Fri, 24 Jul 2020 14:44:46 GMT
- Title: Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference
- Authors: Menelaos Kanakis, David Bruggemann, Suman Saha, Stamatios Georgoulis,
Anton Obukhov, Luc Van Gool
- Abstract summary: Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which has been shown to significantly degrade the single-task performance in a multi-task setup (task interference).
- Score: 75.95287293847697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task networks are commonly utilized to alleviate the need for a large
number of highly specialized single-task networks. However, two common
challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously
incorporating information from new tasks without forgetting the previously
learned ones (incremental learning). Second, eliminating adverse interactions
amongst tasks, which has been shown to significantly degrade the single-task
performance in a multi-task setup (task interference). In this paper, we show
that both can be achieved simply by reparameterizing the convolutions of
standard neural network architectures into a non-trainable shared part (filter
bank) and task-specific parts (modulators), where each modulator has a fraction
of the filter bank parameters. Thus, our reparameterization enables the model
to learn new tasks without adversely affecting the performance of existing
ones. The results of our ablation study attest to the efficacy of the proposed
reparameterization. Moreover, our method achieves state-of-the-art results on two
challenging multi-task learning benchmarks, PASCAL-Context and NYUD, and also
demonstrates superior incremental learning capability as compared to its close
competitors.
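To make the reparameterization concrete, the following is a minimal PyTorch-style sketch, not the authors' released code: a frozen, shared filter bank is mixed into per-task kernels by small trainable modulator matrices, so learning a new task only adds and trains its own modulator. The class name ReparamConv2d, the random bank initialization, and the identity modulator initialization are illustrative assumptions; the paper builds the filter bank from pretrained weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparamConv2d(nn.Module):
    """Illustrative reparameterized convolution: a frozen shared filter bank
    plus one small trainable modulator matrix per task."""

    def __init__(self, in_channels, out_channels, kernel_size, num_tasks,
                 bank_size=None, stride=1, padding=0):
        super().__init__()
        bank_size = bank_size or out_channels
        # Shared, non-trainable filter bank (random here for brevity; the paper
        # derives it from pretrained weights). A buffer receives no gradients.
        self.register_buffer(
            "filter_bank",
            torch.randn(bank_size, in_channels, kernel_size, kernel_size))
        # Task-specific modulators: each mixes the bank's filters into that
        # task's kernels and holds only out_channels * bank_size parameters,
        # a fraction of the bank's bank_size * in_channels * k * k weights.
        self.modulators = nn.ParameterList(
            [nn.Parameter(torch.eye(out_channels, bank_size))
             for _ in range(num_tasks)])
        self.stride, self.padding = stride, padding

    def forward(self, x, task_id):
        # Task-specific kernel = modulator applied to the shared filter bank.
        weight = torch.einsum("ob,bikl->oikl",
                              self.modulators[task_id], self.filter_bank)
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)


# Usage: training task 1 later on only updates modulators[1]; the bank and the
# other modulators stay fixed, so previously learned tasks are unaffected.
layer = ReparamConv2d(64, 64, kernel_size=3, num_tasks=2, padding=1)
out = layer(torch.randn(1, 64, 32, 32), task_id=0)  # shape (1, 64, 32, 32)
```

Because the bank is a buffer and each task owns a dedicated modulator, gradients for one task never touch the parameters used by another, which is what makes the model incremental and free of task interference at the parameter level.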
Related papers
- An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale
Multitask Learning Systems [4.675744559395732]
Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer.
State-of-the-art ML models rely on high customization for each task and leverage size and data scale rather than scaling the number of tasks.
We propose an evolutionary method that can generate a large-scale multitask model and can support the dynamic and continuous addition of new tasks.
arXiv Detail & Related papers (2022-05-25T13:10:47Z)
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as that of a dense model.
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
- Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z)
- In Defense of the Unitary Scalarization for Deep Multi-Task Learning [121.76421174107463]
We present a theoretical analysis suggesting that many specialized multi-task optimizers can be interpreted as forms of regularization.
We show that, when coupled with standard regularization and stabilization techniques, unitary scalarization matches or improves upon the performance of complex multi-task optimizers.
arXiv Detail & Related papers (2022-01-11T18:44:17Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose a new suite of benchmarks aimed at compositional tasks, MultiRavens, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Knowledge Distillation for Multi-task Learning [38.20005345733544]
Multi-task learning (MTL) aims to learn a single model that performs multiple tasks, achieving good performance on all of them at a lower computational cost.
Learning such a model requires jointly optimizing the losses of a set of tasks with different difficulty levels, magnitudes, and characteristics.
We propose a knowledge distillation based method in this work to address the imbalance problem in multi-task learning.
arXiv Detail & Related papers (2020-07-14T08:02:42Z)
- Gradient Surgery for Multi-Task Learning [119.675492088251]
Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks.
The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood.
We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient (a minimal sketch of this projection follows the list).
arXiv Detail & Related papers (2020-01-19T06:33:47Z)
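To illustrate the gradient-surgery summary above, here is a small hedged sketch of the projection step: when two task gradients conflict (negative inner product), one is projected onto the normal plane of the other before the gradients are combined. The function name and the two-task toy setup are illustrative assumptions; the full method applies such projections pairwise, in random order, across all tasks.

```python
import torch

def project_conflicting(grad_a: torch.Tensor, grad_b: torch.Tensor) -> torch.Tensor:
    """If grad_a conflicts with grad_b (negative dot product), remove grad_a's
    component along grad_b, i.e. project it onto grad_b's normal plane."""
    dot = torch.dot(grad_a, grad_b)
    if dot < 0:  # conflicting gradients
        grad_a = grad_a - (dot / grad_b.norm().pow(2)) * grad_b
    return grad_a

# Toy example with two conflicting task gradients (flattened parameter vectors).
g1 = torch.tensor([1.0, 1.0])
g2 = torch.tensor([-1.0, 0.5])
g1_surgery = project_conflicting(g1, g2)   # [0.6, 1.2], now orthogonal to g2
combined = g1_surgery + g2                 # gradients are summed after surgery
print(g1_surgery, combined)
```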
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.