Efficient Computation Sharing for Multi-Task Visual Scene Understanding
- URL: http://arxiv.org/abs/2303.09663v2
- Date: Mon, 14 Aug 2023 19:20:28 GMT
- Title: Efficient Computation Sharing for Multi-Task Visual Scene Understanding
- Authors: Sara Shoouri, Mingyu Yang, Zichen Fan, Hun-Seok Kim
- Abstract summary: Multi-task learning can conserve resources by sharing knowledge across different tasks.
We present a novel computation- and parameter-sharing framework that balances efficiency and accuracy to perform multiple visual tasks.
- Score: 16.727967046330125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving multiple visual tasks using individual models can be
resource-intensive, while multi-task learning can conserve resources by sharing
knowledge across different tasks. Despite the benefits of multi-task learning,
such techniques can struggle with balancing the loss for each task, leading to
potential performance degradation. We present a novel computation- and
parameter-sharing framework that balances efficiency and accuracy to perform
multiple visual tasks utilizing individually-trained single-task transformers.
Our method is motivated by transfer learning schemes to reduce computational
and parameter storage costs while maintaining the desired performance. Our
approach involves splitting the tasks into a base task and the other sub-tasks,
and sharing a significant portion of activations and parameters/weights between
the base and sub-tasks to decrease inter-task redundancies and enhance
knowledge sharing. The evaluation conducted on NYUD-v2 and PASCAL-context
datasets shows that our method is superior to the state-of-the-art
transformer-based multi-task learning techniques with higher accuracy and
reduced computational resources. Moreover, our method is extended to video
stream inputs, further reducing computational costs by efficiently sharing
information across the temporal domain as well as the task domain. Our codes
and models will be publicly available.
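The released code is not reproduced here. As a rough illustration of the computation- and parameter-sharing idea described above, the following is a minimal, hypothetical PyTorch sketch in which a base-task transformer backbone runs once per input and its cached activations are reused by lightweight sub-task heads, so most computation and parameters are shared across tasks. Module names, shapes, and the head design are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Base-task transformer encoder; its activations are cached for reuse by sub-tasks."""
    def __init__(self, dim=256, depth=4, heads=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
             for _ in range(depth)]
        )

    def forward(self, x):
        feats = []
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)  # cache per-layer activations once for all tasks
        return feats

class SubTaskHead(nn.Module):
    """Lightweight task-specific module that consumes the shared activations."""
    def __init__(self, dim=256, out_dim=1):
        super().__init__()
        self.refine = nn.Linear(dim, dim)      # small task-specific correction
        self.predict = nn.Linear(dim, out_dim)

    def forward(self, shared_feats):
        x = torch.relu(self.refine(shared_feats[-1]))
        return self.predict(x)

# One backbone pass serves every task; only the small heads are task-specific.
backbone = SharedBackbone()
heads = nn.ModuleDict({"segmentation": SubTaskHead(out_dim=21),
                       "depth": SubTaskHead(out_dim=1)})
tokens = torch.randn(2, 196, 256)   # (batch, patches, embedding dim)
shared = backbone(tokens)           # computed once, shared across tasks
outputs = {name: head(shared) for name, head in heads.items()}
```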
Related papers
- An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems [4.675744559395732]
Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer.
State-of-the-art ML models rely on heavy customization for each task and leverage model size and data scale rather than scaling the number of tasks.
We propose an evolutionary method that can generate a large-scale multitask model and can support the dynamic and continuous addition of new tasks.
arXiv Detail & Related papers (2022-05-25T13:10:47Z)
- Exploring the Role of Task Transferability in Large-Scale Multi-Task Learning [28.104054292437525]
We disentangle the effect of scale and relatedness of tasks in multi-task representation learning.
If the target tasks are known ahead of time, then training on a smaller set of related tasks is competitive with large-scale multi-task training.
arXiv Detail & Related papers (2022-04-23T18:11:35Z)
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) models improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as that of a dense model. A minimal routing sketch appears after this list.
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
- Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient update would affect another task's loss. A minimal affinity sketch appears after this list.
arXiv Detail & Related papers (2021-09-10T02:01:43Z)
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
- HydaLearn: Highly Dynamic Task Weighting for Multi-task Learning with Auxiliary Tasks [4.095907708855597]
Multi-task learning (MTL) can improve performance on a task by sharing representations with one or more related auxiliary tasks.
Usually, MTL networks are trained on a composite loss function formed by a constant weighted combination of the separate task losses.
In practice, constant loss weights lead to poor results, in part because, for mini-batch based optimisation, the optimal task weights vary significantly from one update to the next depending on mini-batch sample composition.
We introduce HydaLearn, an intelligent weighting algorithm that connects main-task gain to the individual task gradients in order to inform dynamic loss weighting.
arXiv Detail & Related papers (2020-08-26T16:04:02Z)
- Reparameterizing Convolutions for Incremental Multi-Task Learning without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z)
- Knowledge Distillation for Multi-task Learning [38.20005345733544]
Multi-task learning (MTL) aims to learn a single model that performs multiple tasks, achieving good performance on all tasks at a lower computational cost.
Learning such a model requires jointly optimizing the losses of a set of tasks with different difficulty levels, magnitudes, and characteristics.
We propose a knowledge distillation based method in this work to address the imbalance problem in multi-task learning.
arXiv Detail & Related papers (2020-07-14T08:02:42Z)
- Gradient Surgery for Multi-Task Learning [119.675492088251]
Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks.
The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood.
We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient. A minimal projection sketch appears after this list.
arXiv Detail & Related papers (2020-01-19T06:33:47Z)
- Multitask learning over graphs: An Approach for Distributed, Streaming Machine Learning [46.613346075513206]
Multitask learning is an approach to inductive transfer learning.
Recent years have witnessed an increasing ability to collect data in a distributed and streaming manner.
This requires the design of new strategies for jointly learning multiple tasks from streaming data over distributed (or networked) systems.
arXiv Detail & Related papers (2020-01-07T15:32:57Z)
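A minimal sketch of the task-aware routing idea summarized in "Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners" above: a gate conditioned on the task identity selects a few experts per example. The layer sizes, top-1 routing, and the simple per-expert dispatch loop are illustrative assumptions, not that paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAwareMoE(nn.Module):
    """Sparsely activated experts with a gate conditioned on the task identity."""
    def __init__(self, dim=128, num_experts=4, num_tasks=3, top_k=1):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.task_emb = nn.Embedding(num_tasks, dim)
        self.gate = nn.Linear(2 * dim, num_experts)  # sees both the token and its task
        self.top_k = top_k

    def forward(self, x, task_id):
        # x: (batch, dim), task_id: (batch,) integer task labels
        scores = self.gate(torch.cat([x, self.task_emb(task_id)], dim=-1))
        top_val, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_val, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e            # examples routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Only top_k experts run per example, so the cost stays close to a comparable dense layer.
layer = TaskAwareMoE()
x = torch.randn(8, 128)
task_id = torch.randint(0, 3, (8,))
y = layer(x, task_id)
```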
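A minimal sketch of the inter-task affinity measurement summarized in "Efficiently Identifying Task Groupings for Multi-Task Learning" above: take one gradient step on task i alone and measure how task j's loss changes. The model, losses, and learning rate are hypothetical, and the grouping search itself is omitted.

```python
import copy
import torch
import torch.nn.functional as F

def lookahead_affinity(model, loss_i, loss_j_fn, lr=1e-3):
    """How much does one gradient step on task i's loss reduce task j's loss?
    Positive values suggest task i helps task j; negative values suggest conflict."""
    base_loss_j = loss_j_fn(model).item()
    probe = copy.deepcopy(model)                        # lookahead copy of the shared model
    grads = torch.autograd.grad(loss_i, list(model.parameters()))
    with torch.no_grad():
        for p, g in zip(probe.parameters(), grads):
            p -= lr * g                                 # one step using task i's gradient only
    return base_loss_j - loss_j_fn(probe).item()        # drop in task j's loss

# Hypothetical usage with a tiny shared model and two regression tasks:
model = torch.nn.Linear(16, 2)
x, y_a, y_b = torch.randn(32, 16), torch.randn(32, 2), torch.randn(32, 2)
loss_a = F.mse_loss(model(x), y_a)
affinity_a_to_b = lookahead_affinity(model, loss_a,
                                     lambda m: F.mse_loss(m(x), y_b))
```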
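A minimal sketch of the projection rule summarized in "Gradient Surgery for Multi-Task Learning" above, applied to hypothetical flattened per-task gradients; gradient flattening and optimizer integration are omitted.

```python
import torch

def project_conflicting(g_i, g_j, eps=1e-12):
    """If g_i conflicts with g_j (negative dot product), remove from g_i its
    component along g_j, i.e. project g_i onto the normal plane of g_j."""
    dot = torch.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / g_j.norm().pow(2).clamp_min(eps)) * g_j
    return g_i

# Hypothetical flattened per-task gradients of the shared parameters:
g_seg, g_depth = torch.randn(1000), torch.randn(1000)
update = project_conflicting(g_seg, g_depth) + project_conflicting(g_depth, g_seg)
```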
This list is automatically generated from the titles and abstracts of the papers on this site.