Exploring Relational Context for Multi-Task Dense Prediction
- URL: http://arxiv.org/abs/2104.13874v1
- Date: Wed, 28 Apr 2021 16:45:56 GMT
- Title: Exploring Relational Context for Multi-Task Dense Prediction
- Authors: David Bruggemann, Menelaos Kanakis, Anton Obukhov, Stamatios
Georgoulis, Luc Van Gool
- Abstract summary: We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
- Score: 76.86090370115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The timeline of computer vision research is marked with advances in learning
and utilizing efficient contextual representations. Most of them, however, are
targeted at improving model performance on a single downstream task. We
consider a multi-task environment for dense prediction tasks, represented by a
common backbone and independent task-specific heads. Our goal is to find the
most efficient way to refine each task prediction by capturing cross-task
contexts dependent on tasks' relations. We explore various attention-based
contexts, such as global and local, in the multi-task setting and analyze their
behavior when applied to refine each task independently. Empirical findings
confirm that different source-target task pairs benefit from different context
types. To automate the selection process, we propose an Adaptive
Task-Relational Context (ATRC) module, which samples the pool of all available
contexts for each task pair using neural architecture search and outputs the
optimal configuration for deployment. Our method achieves state-of-the-art
performance on two important multi-task benchmarks, namely NYUD-v2 and
PASCAL-Context. The proposed ATRC has a low computational toll and can be used
as a drop-in refinement module for any supervised multi-task architecture.
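The "global" context type mentioned in the abstract can be illustrated as scaled dot-product attention between one task's features (queries) and another task's features (keys/values). This is only a minimal sketch of the general idea, not the authors' ATRC implementation; all function and variable names here are hypothetical:

```python
import numpy as np

def global_cross_task_context(target_feats, source_feats):
    """Refine target-task features with global attention over a source task.

    target_feats: (N, C) query features from the target-task head.
    source_feats: (N, C) key/value features from the source-task head.
    Returns (N, C) refined target features (residual attention).
    """
    n, c = target_feats.shape
    scale = 1.0 / np.sqrt(c)
    # Scaled dot-product attention: every target location attends to
    # every source location (the "global" context type).
    logits = target_feats @ source_feats.T * scale      # (N, N)
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)       # row-wise softmax
    context = weights @ source_feats                    # (N, C)
    return target_feats + context                       # residual refinement

rng = np.random.default_rng(0)
depth = rng.standard_normal((16, 8))  # e.g. flattened depth features
seg = rng.standard_normal((16, 8))    # e.g. flattened segmentation features
refined = global_cross_task_context(depth, seg)
```

A "local" context would restrict each query to a spatial neighborhood of source locations instead of attending globally; the paper's ATRC module searches over such context types per task pair rather than fixing one.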
Related papers
- Cross-Task Affinity Learning for Multitask Dense Scene Predictions [5.939164722752263]
Multitask learning (MTL) has become prominent for its ability to predict multiple tasks jointly.
We introduce the Cross-Task Affinity Learning (CTAL) module, a lightweight framework that enhances task refinement in multitask networks.
Our results demonstrate state-of-the-art MTL performance for both CNN and transformer backbones, using significantly fewer parameters than single-task learning.
arXiv Detail & Related papers (2024-01-20T05:31:47Z) - A Dynamic Feature Interaction Framework for Multi-task Visual Perception [100.98434079696268]
We devise an efficient unified framework to solve multiple common perception tasks.
These tasks include instance segmentation, semantic segmentation, monocular 3D detection, and depth estimation.
Our proposed framework, termed D2BNet, demonstrates a unique approach to parameter-efficient predictions for multi-task perception.
arXiv Detail & Related papers (2023-06-08T09:24:46Z) - Prompt Tuning with Soft Context Sharing for Vision-Language Models [42.61889428498378]

We propose SoftCPT, a novel method to tune pre-trained vision-language models on multiple target few-shot tasks jointly.
We show that SoftCPT significantly outperforms single-task prompt tuning methods.
arXiv Detail & Related papers (2022-08-29T10:19:10Z) - Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate it as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z) - On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks. Yet these methods rely on subjective assumptions about the relationships between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Stochastic Task Allocation (STA), a mechanism that addresses this issue through a task allocation approach in which each sample is randomly allocated a subset of tasks.
For further progress, we propose Interleaved Stochastic Task Allocation (ISTA) to iteratively allocate all tasks.
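The per-sample allocation step described above can be sketched in a few lines; this is an illustrative sketch only, and the function name, task count, and subset size are assumptions, not the paper's implementation:

```python
import random

def stochastic_task_allocation(num_tasks, subset_size, seed=None):
    """Randomly allocate a subset of tasks to one training sample.

    Returns a sorted list of task indices; during training, the sample
    would contribute gradients only for these allocated tasks.
    """
    rng = random.Random(seed)
    return sorted(rng.sample(range(num_tasks), subset_size))

# Allocate 2 of 4 tasks independently to each sample in a small batch.
batch_alloc = [stochastic_task_allocation(num_tasks=4, subset_size=2, seed=i)
               for i in range(3)]
```

Because each sample updates only its allocated tasks, no hand-crafted gradient modification is needed; the interleaved variant would cycle a sample through the remaining tasks over consecutive epochs.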
arXiv Detail & Related papers (2022-03-06T11:57:18Z) - Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance the model generalization by sharing representations between related tasks for better performance.
We propose a semi-supervised MTL method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
arXiv Detail & Related papers (2021-10-14T07:43:39Z) - Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic Conditional Random Fields [67.51177964010967]
We compare different models for low resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.
We find that explicit modeling of inter-dependencies between task predictions outperforms single-task as well as standard multi-task models.
arXiv Detail & Related papers (2020-05-01T07:11:34Z) - Deeper Task-Specificity Improves Joint Entity and Relation Extraction [0.0]
Multi-task learning (MTL) is an effective method for learning related tasks, but designing MTL models requires deciding which and how many parameters should be task-specific.
We propose a novel neural architecture that allows for deeper task-specificity than does prior work.
arXiv Detail & Related papers (2020-02-15T18:34:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.