Controllable Dynamic Multi-Task Architectures
- URL: http://arxiv.org/abs/2203.14949v1
- Date: Mon, 28 Mar 2022 17:56:40 GMT
- Title: Controllable Dynamic Multi-Task Architectures
- Authors: Dripta S. Raychaudhuri, Yumin Suh, Samuel Schulter, Xiang Yu, Masoud
Faraki, Amit K. Roy-Chowdhury, Manmohan Chandraker
- Abstract summary: We propose a controllable multi-task network that dynamically adjusts its architecture and weights to match the desired task preference as well as the resource constraints.
We propose a disentangled training of two hypernetworks, by exploiting task affinity and a novel branching regularized loss, to take input preferences and accordingly predict tree-structured models with adapted weights.
- Score: 92.74372912009127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-task learning commonly encounters competition for resources among
tasks, specifically when model capacity is limited. This challenge motivates
models which allow control over the relative importance of tasks and total
compute cost during inference time. In this work, we propose such a
controllable multi-task network that dynamically adjusts its architecture and
weights to match the desired task preference as well as the resource
constraints. In contrast to the existing dynamic multi-task approaches that
adjust only the weights within a fixed architecture, our approach affords the
flexibility to dynamically control the total computational cost and match the
user-preferred task importance better. We propose a disentangled training of
two hypernetworks, by exploiting task affinity and a novel branching
regularized loss, to take input preferences and accordingly predict
tree-structured models with adapted weights. Experiments on three multi-task
benchmarks, namely PASCAL-Context, NYU-v2, and CIFAR-100, show the efficacy of
our approach. Project page is available at https://www.nec-labs.com/~mas/DYMU.
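As a rough illustration of the mechanism described in the abstract, the sketch below shows a hypernetwork that takes a task-preference vector and a compute-budget scalar and emits both branching logits (choosing among a fixed set of tree layouts) and weights for the task heads. All module names and sizes are hypothetical, and the single-hypernetwork simplification is an assumption; the paper trains two disentangled hypernetworks.

```python
# Minimal sketch (not the authors' code): a hypernetwork maps a task-preference
# vector and a compute budget to (a) branching logits that pick a tree-structured
# layout and (b) weights for the task-specific heads. All names are hypothetical.
import torch
import torch.nn as nn

class ControllableMTLHypernet(nn.Module):
    def __init__(self, num_tasks=3, num_layouts=5, head_dim=64, feat_dim=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(num_tasks + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # One head predicts which tree layout to use ...
        self.layout_head = nn.Linear(256, num_layouts)
        # ... and another predicts the weights of each task head.
        self.weight_head = nn.Linear(256, num_tasks * head_dim * feat_dim)
        self.num_tasks, self.head_dim, self.feat_dim = num_tasks, head_dim, feat_dim

    def forward(self, preference, budget):
        # preference: (num_tasks,) simplex vector; budget: scalar in [0, 1].
        ctx = self.trunk(torch.cat([preference, budget.view(1)]))
        layout_logits = self.layout_head(ctx)  # choose a branching structure
        head_weights = self.weight_head(ctx).view(
            self.num_tasks, self.head_dim, self.feat_dim)
        return layout_logits, head_weights

hyper = ControllableMTLHypernet()
pref = torch.tensor([0.6, 0.3, 0.1])   # user cares most about task 0
budget = torch.tensor(0.5)             # spend roughly half of the max compute
layout_logits, head_weights = hyper(pref, budget)
print(layout_logits.argmax().item(), head_weights.shape)
```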
Related papers
- AdapMTL: Adaptive Pruning Framework for Multitask Learning Model [5.643658120200373]
AdapMTL is an adaptive pruning framework for multi-task models.
It balances sparsity allocation and accuracy across multiple tasks.
It outperforms state-of-the-art pruning methods.
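AdapMTL's actual allocation policy is not reproduced here, but the sparsity-allocation idea can be sketched generically: prune each task head by weight magnitude, with a per-task sparsity level scaled by an (illustrative) sensitivity score.

```python
# A generic magnitude-pruning sketch of the idea (not AdapMTL's actual
# algorithm): allocate a different sparsity level to each task head based on
# how sensitive that task is to pruning. Sensitivities here are dummy values.
import torch

def prune_by_magnitude(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

# Hypothetical per-task sensitivities: more sensitive -> keep more weights.
sensitivity = {"segmentation": 0.9, "depth": 0.6, "normals": 0.3}
global_sparsity = 0.7
total = sum(sensitivity.values())
heads = {t: torch.randn(64, 128) for t in sensitivity}
for task, w in heads.items():
    # Scale sparsity inversely with (normalized) sensitivity.
    task_sparsity = global_sparsity * (1 - sensitivity[task] / total)
    heads[task] = prune_by_magnitude(w, task_sparsity)
    kept = (heads[task] != 0).float().mean().item()
    print(f"{task}: kept {kept:.0%} of weights")
```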
arXiv Detail & Related papers (2024-08-07T17:19:15Z) - Multi-Objective Optimization for Sparse Deep Multi-Task Learning [0.0]
We present a multi-objective optimization algorithm using a modified weighted Chebyshev scalarization for training deep neural networks (DNNs).
Our work aims to address the (economic and ecological) sustainability of DNN models, with a particular focus on deep multi-task models.
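For reference, the textbook weighted Chebyshev scalarization minimizes max_i w_i |L_i - z_i*| over per-task losses L_i, preference weights w_i, and an ideal (utopia) point z*; the paper uses a modified variant not reproduced here. A minimal sketch with illustrative numbers:

```python
# Weighted Chebyshev scalarization in its textbook form: the gradient flows
# only through the worst weighted deviation from the ideal point.
import torch

def chebyshev_scalarize(losses, weights, ideal):
    # losses, weights, ideal: 1-D tensors of equal length (one entry per task)
    return torch.max(weights * (losses - ideal).abs())

losses = torch.tensor([0.82, 0.35], requires_grad=True)  # stand-ins for task losses
weights = torch.tensor([0.7, 0.3])                       # task preference
ideal = torch.tensor([0.0, 0.0])                         # utopia point
obj = chebyshev_scalarize(losses, weights, ideal)
obj.backward()   # only the worst weighted task receives gradient
print(obj.item(), losses.grad)
```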
arXiv Detail & Related papers (2023-08-23T16:42:27Z) - JiuZhang 2.0: A Unified Chinese Pre-trained Language Model for
Multi-task Mathematical Problem Solving [77.51817534090789]
We propose JiuZhang 2.0, a unified Chinese PLM specialized for multi-task mathematical problem solving.
Our idea is to maintain a moderate-sized model and employ cross-task knowledge sharing to improve the model's capacity in a multi-task setting.
arXiv Detail & Related papers (2023-06-19T15:45:36Z) - Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) models improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as that of a dense model.
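A minimal sketch of a task-aware gate, assuming a task-embedding design (the paper's exact gating function may differ): the gate conditions on both the input features and the task identity, and routes each example to its top-1 expert, so per-example compute stays close to that of a single dense expert.

```python
# Task-aware top-1 MoE routing sketch; all names and sizes are illustrative.
import torch
import torch.nn as nn

class TaskAwareMoE(nn.Module):
    def __init__(self, dim=32, num_experts=4, num_tasks=3):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.task_emb = nn.Embedding(num_tasks, dim)
        self.gate = nn.Linear(2 * dim, num_experts)

    def forward(self, x, task_id):
        # The gate sees both the example's features and the task identity.
        t = self.task_emb(task_id).expand(x.size(0), -1)
        scores = self.gate(torch.cat([x, t], dim=-1))
        top1 = scores.argmax(dim=-1)   # top-1 routing keeps activation sparse
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top1 == e
            if sel.any():
                out[sel] = expert(x[sel])
        return out

moe = TaskAwareMoE()
x = torch.randn(8, 32)
y = moe(x, torch.tensor(1))   # route a batch of task-1 examples
print(y.shape)
```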
arXiv Detail & Related papers (2022-04-16T00:56:12Z) - Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
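The idea can be sketched roughly as follows (details differ from the paper): give each layer a learnable gate that blends a frozen shared weight with a task-specific delta, and penalize open gates so only a few layers end up task-specific. All names here are illustrative.

```python
# Gated task-specific layer sketch: the base weights stay frozen and shared;
# a sigmoid gate decides how much of the task-specific delta to mix in.
import torch
import torch.nn as nn

class GatedTaskLinear(nn.Module):
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # shared base model stays frozen
        self.delta = nn.Parameter(torch.zeros_like(base.weight))
        self.gate = nn.Parameter(torch.tensor(0.0))  # learned, sigmoid-gated

    def forward(self, x):
        g = torch.sigmoid(self.gate)
        # With a sparsity penalty on the gates, most stay near zero (layer
        # keeps the shared weights) and a few open up (layer becomes
        # task-specific), yielding few task-specific parameters overall.
        w = self.base.weight + g * self.delta
        return nn.functional.linear(x, w, self.base.bias)

layer = GatedTaskLinear(nn.Linear(16, 16))
out = layer(torch.randn(4, 16))
gate_penalty = torch.sigmoid(layer.gate)   # add to the task loss, scaled
print(out.shape, gate_penalty.item())
```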
arXiv Detail & Related papers (2022-03-30T23:16:07Z) - A Tree-Structured Multi-Task Model Recommender [25.445073413243925]
Tree-structured multi-task architectures have been employed to tackle multiple vision tasks in the context of multi-task learning (MTL).
This paper proposes a recommender that automatically suggests tree-structured multi-task architectures that could achieve a high task performance while meeting a user-specified computation budget without performing model training.
Extensive evaluations on popular MTL benchmarks show that the recommended architectures could achieve competitive task accuracy and computation efficiency compared with state-of-the-art MTL methods.
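The selection step can be pictured with a toy sketch: enumerate candidate branching points, estimate their compute, and return the highest-scoring tree that fits the budget. The scores and FLOP estimates below are made up; in the paper they derive from task affinity, computed without training the candidates.

```python
# Toy budget-constrained recommender: pick the best-scoring tree under budget.
candidates = [
    # (branch-at-layer, estimated GFLOPs, predicted task-affinity score)
    {"branch_at": 1, "gflops": 9.2, "score": 0.71},   # branch early: costly
    {"branch_at": 3, "gflops": 6.4, "score": 0.66},
    {"branch_at": 5, "gflops": 4.1, "score": 0.58},   # branch late: cheap
]

def recommend(candidates, budget_gflops):
    feasible = [c for c in candidates if c["gflops"] <= budget_gflops]
    if not feasible:
        raise ValueError("no tree fits the budget")
    return max(feasible, key=lambda c: c["score"])

print(recommend(candidates, budget_gflops=7.0))  # -> the branch_at=3 tree
```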
arXiv Detail & Related papers (2022-03-10T00:09:43Z) - Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose MultiRavens, a new benchmark suite aimed at compositional tasks, which allows defining custom task combinations.
Second, we propose Sequence-Conditioned Transporter Networks, a vision-based end-to-end system architecture that augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
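As a generic illustration of the weighted-sampling component (not the paper's exact scheme), one can sample tasks in proportion to how far each still lags:

```python
# Weighted task sampling sketch: harder (lower-success) tasks are drawn more
# often. The success rates below are illustrative, not from the paper.
import random

task_success = {"stack": 0.9, "sort": 0.5, "route": 0.2}
# Weight each task by its failure rate so lagging tasks get more samples.
weights = {t: 1.0 - s for t, s in task_success.items()}
tasks, w = zip(*weights.items())
batch = random.choices(tasks, weights=w, k=10)
print(batch)  # mostly "route", some "sort", rarely "stack"
```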
arXiv Detail & Related papers (2021-09-15T21:19:11Z) - Controllable Pareto Multi-Task Learning [55.945680594691076]
A multi-task learning system aims at solving multiple related tasks at the same time.
With a fixed model capacity, the tasks conflict with one another, and the system usually has to make a trade-off among learning all of them together.
This work proposes a novel controllable multi-task learning framework that enables the system to make real-time trade-off control among different tasks with a single model.
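A minimal sketch of the single-model, real-time trade-off idea, under the assumption that the preference vector is fed to the network as an extra input (the paper's exact conditioning may differ): changing the preference at inference time moves the model along the trade-off curve without retraining.

```python
# Preference-conditioned multi-task network sketch; sizes are hypothetical.
import torch
import torch.nn as nn

class PreferenceConditionedNet(nn.Module):
    def __init__(self, in_dim=16, num_tasks=2, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim + num_tasks, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(num_tasks))

    def forward(self, x, preference):
        # Concatenate the preference vector onto every input example.
        h = self.body(torch.cat([x, preference.expand(x.size(0), -1)], dim=-1))
        return [head(h) for head in self.heads]

net = PreferenceConditionedNet()
x = torch.randn(4, 16)
# The same weights serve both trade-offs; only the preference input changes.
outs_a = net(x, torch.tensor([0.9, 0.1]))   # favor task 0
outs_b = net(x, torch.tensor([0.1, 0.9]))   # favor task 1
print(outs_a[0].shape, outs_b[0].shape)
```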
arXiv Detail & Related papers (2020-10-13T11:53:55Z) - Dynamic Task Weighting Methods for Multi-task Networks in Autonomous
Driving Systems [10.625400639764734]
Deep multi-task networks are of particular interest for autonomous driving systems.
We propose a novel method combining evolutionary meta-learning and task-based selective backpropagation.
Our method outperforms state-of-the-art methods by a significant margin on a two-task application.
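The selective-backpropagation half can be sketched in a few lines (the evolutionary meta-learning that produces the task weights is omitted): on each step, sample one task according to the current weights and backpropagate only that task's loss.

```python
# Task-based selective backpropagation sketch; the task weights would come
# from the meta-learner in the paper, here they are fixed illustrative values.
import torch
import torch.nn as nn

net = nn.Linear(8, 2)                      # toy two-task model, one output each
opt = torch.optim.SGD(net.parameters(), lr=0.1)
task_weights = torch.tensor([0.7, 0.3])    # e.g. evolved by the meta-learner

for step in range(5):
    x, targets = torch.randn(16, 8), torch.randn(16, 2)
    preds = net(x)
    # Select one task per step with probability given by the task weights ...
    task = torch.multinomial(task_weights, 1).item()
    # ... and backpropagate only that task's loss.
    loss = nn.functional.mse_loss(preds[:, task], targets[:, task])
    opt.zero_grad()
    loss.backward()
    opt.step()
```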
arXiv Detail & Related papers (2020-01-07T18:54:21Z)