Contrastive Multi-Task Dense Prediction
- URL: http://arxiv.org/abs/2307.07934v1
- Date: Sun, 16 Jul 2023 03:54:01 GMT
- Title: Contrastive Multi-Task Dense Prediction
- Authors: Siwei Yang, Hanrong Ye, Dan Xu
- Abstract summary: A core design objective is how to effectively model cross-task interactions to achieve comprehensive improvements across different tasks.
We introduce feature-wise contrastive consistency into the modeling of cross-task interactions for multi-task dense prediction.
We propose a novel multi-task contrastive regularization method based on this consistency to effectively boost the representation learning of the different sub-tasks.
- Score: 11.227696986100447
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper targets the problem of multi-task dense prediction, which aims to
achieve simultaneous learning and inference on multiple dense prediction tasks
within a single framework. A core design objective is how to effectively model
cross-task interactions so that all tasks benefit from their inherent
complementarity and consistency. Existing works typically design extra,
expensive distillation modules to perform explicit interaction computations
among the different task-specific features in both training and inference,
which makes them harder to adapt to different task sets and less efficient due
to the clearly increased size of the multi-task models. In contrast, we
introduce feature-wise contrastive consistency into the modeling of cross-task
interactions for multi-task dense prediction. We propose a novel multi-task
contrastive regularization method based on this consistency to effectively
boost the representation learning of the different sub-tasks; it can also be
easily generalized to different multi-task dense prediction frameworks and
incurs no additional computation at inference. Extensive experiments on two
challenging datasets (i.e., NYUD-v2 and PASCAL-Context) clearly demonstrate the
superiority of the proposed multi-task contrastive learning approach for dense
prediction, establishing new state-of-the-art performance.
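The contrastive regularization described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example (not the authors' released code) of a feature-wise cross-task contrastive loss in PyTorch: features of two tasks at the same spatial location form positive pairs, features at other locations serve as negatives, and the term is added to the supervised per-task losses. All names and hyper-parameters (`tau`, `num_samples`, `lambda_ctr`) are assumptions.

```python
# Minimal sketch (not the authors' implementation) of a feature-wise
# contrastive regularizer between two task-specific feature maps.
# Positive pairs: features of the two tasks at the same spatial location.
# Negatives: features at other sampled locations.
import torch
import torch.nn.functional as F

def cross_task_contrastive_loss(feat_a, feat_b, tau=0.07, num_samples=256):
    """feat_a, feat_b: [B, C, H, W] task-specific features from a shared backbone."""
    b, c, h, w = feat_a.shape
    # Flatten spatial dimensions and sample a subset of locations for efficiency.
    fa = feat_a.flatten(2).transpose(1, 2).reshape(-1, c)  # [B*H*W, C]
    fb = feat_b.flatten(2).transpose(1, 2).reshape(-1, c)
    idx = torch.randperm(fa.size(0), device=fa.device)[:num_samples]
    fa, fb = F.normalize(fa[idx], dim=1), F.normalize(fb[idx], dim=1)
    # InfoNCE: the i-th feature of task A should match the i-th feature of task B.
    logits = fa @ fb.t() / tau                              # [N, N]
    targets = torch.arange(fa.size(0), device=fa.device)
    return F.cross_entropy(logits, targets)

# Usage sketch: add the regularizer to the standard per-task supervised losses.
# total_loss = loss_seg + loss_depth + lambda_ctr * cross_task_contrastive_loss(f_seg, f_depth)
```

Because such a regularizer only shapes the task-specific features during training, it can be dropped entirely at test time, which is consistent with the abstract's claim of no additional inference cost.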
Related papers
- A Multitask Deep Learning Model for Classification and Regression of Hyperspectral Images: Application to the large-scale dataset [44.94304541427113]
We propose a multitask deep learning model to perform multiple classification and regression tasks simultaneously on hyperspectral images.
We validated our approach on a large hyperspectral dataset called TAIGA.
A comprehensive qualitative and quantitative analysis of the results shows that the proposed method significantly outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-23T11:14:54Z) - DiffusionMTL: Learning Multi-Task Denoising Diffusion Model from Partially Annotated Data [16.501973201535442]
We reformulate the partially-labeled multi-task dense prediction as a pixel-level denoising problem.
We propose a novel multi-task denoising framework coined as DiffusionMTL.
It designs a joint diffusion and denoising paradigm to model a potential noisy distribution in the task prediction or feature maps.
arXiv Detail & Related papers (2024-03-22T17:59:58Z) - Leveraging convergence behavior to balance conflicting tasks in
multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve generalization performance.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
arXiv Detail & Related papers (2022-04-14T01:52:34Z) - In Defense of the Unitary Scalarization for Deep Multi-Task Learning [121.76421174107463]
We present a theoretical analysis suggesting that many specialized multi-task optimizers can be interpreted as forms of regularization.
We show that, when coupled with standard regularization and stabilization techniques, unitary scalarization matches or improves upon the performance of complex multi-task optimizers.
arXiv Detail & Related papers (2022-01-11T18:44:17Z) - Cross-Task Consistency Learning Framework for Multi-Task Learning [9.991706230252708]
We propose a new learning framework for the 2-task MTL problem.
We define two new loss terms inspired by cycle-consistency loss and contrastive learning.
We theoretically prove that both losses help the model learn more efficiently and that the cross-task consistency loss is better in terms of alignment with the straightforward predictions.
arXiv Detail & Related papers (2021-11-28T11:55:19Z) - Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z) - Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z) - Gradient Surgery for Multi-Task Learning [119.675492088251]
Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks.
The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood.
We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient.
arXiv Detail & Related papers (2020-01-19T06:33:47Z)