Instance-Level Task Parameters: A Robust Multi-task Weighting Framework
- URL: http://arxiv.org/abs/2106.06129v1
- Date: Fri, 11 Jun 2021 02:35:42 GMT
- Title: Instance-Level Task Parameters: A Robust Multi-task Weighting Framework
- Authors: Pavan Kumar Anasosalu Vasu, Shreyas Saxena, Oncel Tuzel
- Abstract summary: Recent works have shown that deep neural networks benefit from multi-task learning by learning a shared representation across several related tasks.
We let the training process dictate the optimal weighting of tasks for every instance in the dataset.
We conduct extensive experiments on SURREAL and CityScapes datasets, for human shape and pose estimation, depth estimation and semantic segmentation tasks.
- Score: 17.639472693362926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent works have shown that deep neural networks benefit from multi-task
learning by learning a shared representation across several related tasks.
However, the performance of such systems depends on the relative weighting between
various losses involved during training. Prior works on loss weighting schemes
assume that instances are equally easy or hard for all tasks. In order to break
this assumption, we let the training process dictate the optimal weighting of
tasks for every instance in the dataset. More specifically, we equip every
instance in the dataset with a set of learnable parameters (instance-level task
parameters) where the cardinality is equal to the number of tasks learned by
the model. These parameters model the weighting of each task for an instance.
They are updated by gradient descent and do not require hand-crafted rules. We
conduct extensive experiments on SURREAL and CityScapes datasets, for human
shape and pose estimation, depth estimation and semantic segmentation tasks. In
these tasks, our approach outperforms recent dynamic loss weighting approaches,
e.g. reducing surface estimation errors by 8.97% on SURREAL. When applied to
datasets where one or more tasks can have noisy annotations, the proposed
method learns to prioritize learning from clean labels for a given task, e.g.
reducing surface estimation errors by up to 60%. We also show that we can
reliably detect corrupt labels for a given task as a by-product from learned
instance-level task parameters.
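The abstract above describes attaching a learnable weight vector (one entry per task) to every training instance and updating it by gradient descent together with the network. The following is a minimal, hypothetical PyTorch sketch of that idea; the log-variance parameterization and all names are assumptions for illustration, since the abstract does not specify the exact formulation.

import torch
import torch.nn as nn

class InstanceTaskWeights(nn.Module):
    """Per-instance, per-task learnable loss weights (illustrative sketch only)."""

    def __init__(self, num_instances: int, num_tasks: int):
        super().__init__()
        # One learnable parameter per (instance, task) pair, initialised to zero.
        self.log_vars = nn.Embedding(num_instances, num_tasks)
        nn.init.zeros_(self.log_vars.weight)

    def forward(self, instance_ids: torch.Tensor, task_losses: torch.Tensor) -> torch.Tensor:
        # task_losses: (batch, num_tasks) unweighted per-task losses for each instance.
        s = self.log_vars(instance_ids)                 # (batch, num_tasks)
        # Assumed log-variance weighting; the paper's exact parameterization may differ.
        weighted = torch.exp(-s) * task_losses + 0.5 * s
        return weighted.sum(dim=1).mean()

# Jointly optimise the network and the instance-level task parameters, e.g.:
#   weights = InstanceTaskWeights(len(dataset), num_tasks=3)
#   opt = torch.optim.Adam([*model.parameters(), *weights.parameters()])
#   loss = weights(batch_indices, per_task_losses); loss.backward(); opt.step()

Under such a parameterization, instances whose effective weight exp(-s) for a task collapses toward zero would be natural candidates for the corrupt-label detection mentioned in the abstract.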
Related papers
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task, then divide them into easy-to-difficult mini-batches for training.
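As a rough, hypothetical illustration of the instance-level step only (not Data-CUBE itself), the sketch below sorts examples by a placeholder per-task difficulty score and yields easy-to-difficult mini-batches.

from typing import Callable, Iterable, List, Sequence

def easy_to_difficult_batches(
    examples: Sequence,
    difficulty: Callable[[object], float],  # placeholder per-task difficulty score
    batch_size: int,
) -> Iterable[List]:
    """Yield mini-batches ordered from easiest to hardest (illustrative sketch)."""
    ordered = sorted(examples, key=difficulty)          # easy examples first
    for start in range(0, len(ordered), batch_size):
        yield ordered[start:start + batch_size]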
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
- Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning [20.177260510548535]
We propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, either parameter allocation or regularization, based on its learning difficulty.
Our method is scalable and significantly reduces the model's redundancy while improving the model's performance.
arXiv Detail & Related papers (2023-04-11T15:38:21Z)
- AdaTask: A Task-aware Adaptive Learning Rate Approach to Multi-task Learning [19.201899503691266]
We measure the task dominance degree of a parameter by the total updates of each task on this parameter.
We propose a Task-wise Adaptive learning rate approach, AdaTask, to separate the accumulative gradients, and hence the learning rate, of each task.
Experiments on computer vision and recommender system MTL datasets demonstrate that AdaTask significantly improves the performance of dominated tasks.
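A hypothetical sketch of the idea summarized above: keep a separate squared-gradient accumulator per task for a shared parameter and scale each task's contribution by it, RMSProp-style. The names and the exact update rule are assumptions; see the paper for the actual algorithm.

import torch

def per_task_adaptive_update(param, task_grads, accumulators, lr=1e-3, beta=0.99, eps=1e-8):
    """Update a shared parameter with per-task gradient accumulators (sketch only).

    task_grads:   list of gradients of each task's loss w.r.t. `param`.
    accumulators: list of running squared-gradient averages, one per task
                  (same shape as `param`), updated in place.
    """
    step = torch.zeros_like(param)
    for g, acc in zip(task_grads, accumulators):
        acc.mul_(beta).addcmul_(g, g, value=1.0 - beta)   # per-task second-moment estimate
        step.add_(g / (acc.sqrt() + eps))                  # per-task adaptive scaling
    with torch.no_grad():
        param.sub_(lr * step)

Computing task_grads would take one backward pass per task, e.g. torch.autograd.grad on each task loss with retain_graph=True.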
arXiv Detail & Related papers (2022-11-28T04:24:38Z)
- DiSparse: Disentangled Sparsification for Multitask Model Compression [92.84435347164435]
DiSparse is a simple, effective, and first-of-its-kind multitask pruning and sparse training scheme.
Our experimental results demonstrate superior performance on various configurations and settings.
arXiv Detail & Related papers (2022-06-09T17:57:46Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
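The sketch below shows one hypothetical way such layer-wise adaptation could look: a learnable gate per layer decides whether the layer adds a task-specific residual to the frozen base weights. The gating scheme and names are illustrative assumptions, not TAPS's actual formulation.

import torch
import torch.nn as nn

class GatedTaskLinear(nn.Module):
    """Frozen base linear layer plus a gated, task-specific weight delta (sketch)."""

    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                      # keep the base model frozen
        self.delta = nn.Parameter(torch.zeros_like(base.weight))
        self.gate_logit = nn.Parameter(torch.zeros(()))  # learns whether to adapt this layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.gate_logit)            # ~0: fully shared, ~1: task-specific
        weight = self.base.weight + gate * self.delta
        return nn.functional.linear(x, weight, self.base.bias)

A sparsity penalty on the gates would push most layers back toward fully shared weights, keeping the number of task-specific parameters small.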
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks, yet these methods rely on a subjective assumption about the relationship between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Stochastic Task Allocation (STA), a mechanism that addresses this issue through a task allocation approach in which each sample is randomly allocated a subset of tasks.
For further progress, we propose Interleaved Stochastic Task Allocation (ISTA) to iteratively allocate all tasks to each sample over consecutive iterations.
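A toy sketch of the random allocation described above: each sample in a batch receives a random subset of tasks through a binary mask, and only the allocated task losses contribute to the update. The helper name and subset-size rule are illustrative assumptions.

import torch

def random_task_mask(batch_size: int, num_tasks: int, tasks_per_sample: int) -> torch.Tensor:
    """Return a (batch, num_tasks) 0/1 mask allocating a random task subset per sample."""
    scores = torch.rand(batch_size, num_tasks)
    topk = scores.topk(tasks_per_sample, dim=1).indices
    mask = torch.zeros(batch_size, num_tasks)
    return mask.scatter_(1, topk, 1.0)

# Only the losses of allocated tasks contribute to the update:
#   mask = random_task_mask(per_task_losses.size(0), per_task_losses.size(1), tasks_per_sample=2)
#   loss = (mask * per_task_losses).sum() / mask.sum()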
arXiv Detail & Related papers (2022-03-06T11:57:18Z)
- TAG: Task-based Accumulated Gradients for Lifelong learning [21.779858050277475]
We propose a task-aware system that adapts the learning rate based on the relatedness among tasks.
We empirically show that our proposed adaptive learning rate not only accounts for catastrophic forgetting but also allows positive backward transfer.
arXiv Detail & Related papers (2021-05-11T16:10:32Z)
- Combat Data Shift in Few-shot Learning with Knowledge Graph [42.59886121530736]
In real-world applications, the few-shot learning paradigm often suffers from data shift.
Most existing few-shot learning approaches are not designed to account for data shift.
We propose a novel metric-based meta-learning framework to extract task-specific representations and task-shared representations.
arXiv Detail & Related papers (2021-01-27T12:35:18Z)
- Parameter-Efficient Transfer Learning with Diff Pruning [108.03864629388404]
Diff pruning is a simple approach to enable parameter-efficient transfer learning within the pretrain-finetune framework.
We find that models finetuned with diff pruning can match the performance of fully finetuned baselines on the GLUE benchmark.
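As a rough illustration, the sketch below parameterizes the task model as frozen pretrained weights plus a learnable task-specific diff, with an L1 penalty standing in for the paper's relaxed L0 sparsity regularizer; the names and penalty choice are assumptions.

import torch
import torch.nn as nn

class DiffLinear(nn.Module):
    """Frozen pretrained linear layer plus a sparse task-specific diff (sketch)."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.pretrained = pretrained
        for p in self.pretrained.parameters():
            p.requires_grad_(False)                       # pretrained weights stay fixed
        self.diff = nn.Parameter(torch.zeros_like(pretrained.weight))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.pretrained.weight + self.diff, self.pretrained.bias)

    def sparsity_penalty(self) -> torch.Tensor:
        # L1 stand-in for the paper's relaxed L0 regularizer.
        return self.diff.abs().sum()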
arXiv Detail & Related papers (2020-12-14T12:34:01Z)
- Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z)