Multi-Task Learning for Dense Prediction Tasks: A Survey
- URL: http://arxiv.org/abs/2004.13379v3
- Date: Sun, 24 Jan 2021 18:56:09 GMT
- Title: Multi-Task Learning for Dense Prediction Tasks: A Survey
- Authors: Simon Vandenhende, Stamatios Georgoulis, Wouter Van Gansbeke, Marc
Proesmans, Dengxin Dai and Luc Van Gool
- Abstract summary: Multi-task learning (MTL) techniques have shown promising results with respect to performance, computation, and/or memory footprint.
We provide a well-rounded view of state-of-the-art deep learning approaches to MTL in computer vision.
- Score: 87.66280582034838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advent of deep learning, many dense prediction tasks, i.e. tasks
that produce pixel-level predictions, have seen significant performance
improvements. The typical approach is to learn these tasks in isolation, that
is, a separate neural network is trained for each individual task. Yet, recent
multi-task learning (MTL) techniques have shown promising results with respect to
performance, computation, and/or memory footprint, by jointly tackling multiple
tasks through a learned shared representation. In this survey, we provide a
well-rounded view of state-of-the-art deep learning approaches to MTL in
computer vision, with an explicit emphasis on dense prediction tasks. Our
contributions are as follows. First, we consider MTL from a network
architecture point of view: we include an extensive overview and discuss the
advantages and disadvantages of recent popular MTL models. Second, we examine
various optimization methods for the joint learning of multiple tasks, summarize
the qualitative elements of these works, and explore their commonalities and
differences. Finally, we provide an extensive experimental evaluation across a
variety of dense prediction benchmarks to examine the pros and cons of the
different methods, including both architectural and optimization-based
strategies.
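As context for the architectures the survey covers, here is a minimal sketch of the hard parameter sharing pattern most MTL models build on: one shared encoder feeds a small decoder per dense prediction task. The module sizes and the task set are illustrative assumptions, not the survey's reference implementation.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """One shared encoder feeds a lightweight decoder per dense task."""
    def __init__(self, tasks=None):
        super().__init__()
        tasks = tasks or {"semseg": 21, "depth": 1}  # task -> output channels (illustrative)
        # Shared representation, learned jointly by all tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task-specific heads producing pixel-level predictions.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(
                nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
                nn.Conv2d(64, out_ch, 1),
            )
            for name, out_ch in tasks.items()
        })

    def forward(self, x):
        shared = self.encoder(x)  # single shared forward pass
        return {name: dec(shared) for name, dec in self.decoders.items()}

outputs = HardSharingMTL()(torch.randn(2, 3, 256, 256))  # {'semseg': ..., 'depth': ...}
```

Every task is predicted from one shared feature map, which is where the computation and memory savings mentioned in the abstract come from.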
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even for classification tasks with little or non-overlapping annotations.
We propose a novel approach in which knowledge exchange between the tasks is enabled via distribution matching.
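The summary does not specify the matching objective; below is a minimal sketch of one generic form of distribution matching, assuming (purely for illustration) that two task heads predict over the same label space and are coupled by a symmetric KL term. It is not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def distribution_matching_loss(logits_a, logits_b, T=2.0):
    """Couple two task heads by matching their softened output
    distributions with a symmetric KL divergence.
    Assumes a shared label space (illustrative assumption)."""
    log_p = F.log_softmax(logits_a / T, dim=1)
    log_q = F.log_softmax(logits_b / T, dim=1)
    kl_pq = F.kl_div(log_p, log_q.exp(), reduction="batchmean")
    kl_qp = F.kl_div(log_q, log_p.exp(), reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)
```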
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Multi-Task Cooperative Learning via Searching for Flat Minima [8.835287696319641]
We propose to formulate MTL as a multi/bi-level optimization problem, thereby forcing features to be learned from each task in a cooperative manner.
Specifically, we update the sub-model for each task alternately, taking advantage of the learned sub-models of the other tasks.
To alleviate negative transfer during optimization, we search for flat minima of the current objective function.
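The summary leaves the flat-minimum search unspecified; the sketch below is a generic sharpness-aware update in the spirit of SAM, shown only to illustrate what searching for flat minima can look like in practice, not the paper's exact procedure.

```python
import torch

def flat_minima_step(model, loss_fn, batch, optimizer, rho=0.05):
    """One sharpness-aware update: ascend to nearby worst-case weights,
    take the gradient there, then descend from the original weights,
    biasing the solution toward flat regions. Generic sketch only."""
    # 1) Gradients at the current weights.
    loss = loss_fn(model, batch)
    loss.backward()
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum()
                               for p in model.parameters() if p.grad is not None))
    # 2) Perturb weights toward the locally worst direction.
    eps = {}
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps[p] = e
    model.zero_grad()
    # 3) Gradient at the perturbed point, then restore weights and step.
    loss_fn(model, batch).backward()
    with torch.no_grad():
        for p, e in eps.items():
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```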
arXiv Detail & Related papers (2023-09-21T14:00:11Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
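As a rough illustration of the contrastive component, here is a standard in-batch InfoNCE objective; SciMult's actual task-aware formulation may differ and is described in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.07):
    """In-batch contrastive loss: each anchor embedding should match its
    own positive, with all other positives in the batch as negatives."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```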
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Multi-Task Self-Supervised Learning for Image Segmentation Task [0.0]
The paper presents (1) self-supervised techniques to boost semantic segmentation performance using multi-task learning with depth prediction and surface normal estimation, and (2) a performance evaluation of different loss-weighting techniques (uncertainty weighting (UW), Nash-MTL) used for multi-task learning, with a sketch of UW after this entry.
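Of the weighting schemes mentioned, uncertainty weighting (UW) has a well-known closed form due to Kendall et al.: each task loss is scaled by a learned homoscedastic uncertainty. A minimal sketch of that standard formulation (the paper's exact setup is not reproduced here):

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Kendall et al.-style weighting: total = sum_i exp(-s_i) * L_i + s_i,
    where s_i = log(sigma_i^2) is a learned per-task log-variance."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            # Noisier tasks learn larger s_i and are down-weighted.
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total
```

The log-variance parameters are optimized jointly with the network weights, so noisy tasks are down-weighted automatically.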
arXiv Detail & Related papers (2023-02-05T21:25:59Z)
- Multi-Task Learning for Visual Scene Understanding [7.191593674138455]
This thesis is concerned with multi-task learning in the context of computer vision.
We propose several methods that tackle important aspects of multi-task learning.
The results advance the state of the art in multi-task learning in several respects.
arXiv Detail & Related papers (2022-03-28T16:57:58Z)
- On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks, yet these methods rely on subjective assumptions about the relationships between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Stochastic Task Allocation (STA), a mechanism that addresses this issue by randomly allocating a subset of tasks to each sample.
For further progress, we propose Interleaved Stochastic Task Allocation (ISTA) to iteratively allocate all
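The per-sample allocation idea lends itself to a compact sketch. The fixed subset size and the unreduced per-task loss layout below are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def sta_loss(per_task_losses, num_active=2, generator=None):
    """Stochastic Task Allocation sketch: each sample contributes
    gradients for only a random subset of tasks.
    per_task_losses: (B, T) tensor of unreduced per-sample, per-task losses."""
    B, T = per_task_losses.shape
    # Randomly pick `num_active` tasks per sample.
    scores = torch.rand(B, T, generator=generator,
                        device=per_task_losses.device)
    idx = scores.topk(num_active, dim=1).indices
    mask = torch.zeros(B, T, device=per_task_losses.device)
    mask.scatter_(1, idx, 1.0)
    # Average only over the allocated (sample, task) pairs.
    return (per_task_losses * mask).sum() / mask.sum()
```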
arXiv Detail & Related papers (2022-03-06T11:57:18Z)
- Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance model generalization by sharing representations between related tasks.
We propose a semi-supervised multi-task learning method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
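One common way to realize such dataset alignment is an adversarial domain discriminator on shared features with gradient reversal (DANN-style). The sketch below illustrates that generic pattern under that assumption; it is not the paper's exact discriminator structure.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts which dataset a shared feature came from; the reversed
    gradient pushes the encoder toward domain-invariant features."""
    def __init__(self, feat_dim, num_domains=2, lam=1.0):
        super().__init__()
        self.lam = lam
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_domains),
        )

    def forward(self, features):
        return self.net(GradReverse.apply(features, self.lam))
```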
arXiv Detail & Related papers (2021-10-14T07:43:39Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Multi-Task Learning with Deep Neural Networks: A Survey [0.0]
Multi-task learning (MTL) is a subfield of machine learning in which multiple tasks are simultaneously learned by a shared model.
We give an overview of multi-task learning methods for deep neural networks, with the aim of summarizing both the well-established and most recent directions within the field.
arXiv Detail & Related papers (2020-09-10T19:31:04Z)