CompositeTasking: Understanding Images by Spatial Composition of Tasks
- URL: http://arxiv.org/abs/2012.09030v1
- Date: Wed, 16 Dec 2020 15:47:02 GMT
- Title: CompositeTasking: Understanding Images by Spatial Composition of Tasks
- Authors: Nikola Popovic, Danda Pani Paudel, Thomas Probst, Guolei Sun, Luc Van
Gool
- Abstract summary: CompositeTasking is the fusion of multiple, spatially distributed tasks.
The proposed network takes as input an image paired with a set of pixel-wise dense tasks, and makes task-related predictions for each pixel.
It not only offers us a compact network for multi-tasking, but also allows for task-editing.
- Score: 85.95743368954233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We define the concept of CompositeTasking as the fusion of multiple,
spatially distributed tasks, for various aspects of image understanding.
Learning to perform spatially distributed tasks is motivated by the frequent
availability of only sparse labels across tasks, and the desire for a compact
multi-tasking network. To facilitate CompositeTasking, we introduce a novel
task conditioning model -- a single encoder-decoder network that performs
multiple, spatially varying tasks at once. The proposed network takes as input a
pair consisting of an image and a set of pixel-wise dense tasks, and makes
task-related predictions for each pixel, including the decision of which task to
apply where. For the latter, we learn the composition of tasks that needs to be
performed according to some CompositeTasking rules. This not only offers a
compact network for multi-tasking, but also allows for task-editing. The
strength of the proposed method is demonstrated by only
having to supply sparse supervision per task. The obtained results are on par
with our baselines that use dense supervision and a multi-headed multi-tasking
design. The source code will be made publicly available at
www.github.com/nikola3794/composite-tasking.
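The central idea of pairing an image with a pixel-wise task map can be illustrated with a toy composition step. The sketch below is an illustrative assumption, not the authors' implementation (their model conditions a single encoder-decoder, rather than merging separate per-task outputs as done here); it simply selects, at each pixel, the prediction of whichever task the task map requests there:

```python
import numpy as np

def composite_predict(per_task_preds, task_map):
    """Compose one output image by picking, at every pixel, the
    prediction of the task requested by the pixel-wise task map.

    per_task_preds: dict mapping task id -> (H, W) prediction array
    task_map: (H, W) integer array of task ids (the "dense tasks" input)
    """
    out = np.zeros(task_map.shape, dtype=float)
    for task_id, pred in per_task_preds.items():
        mask = task_map == task_id   # pixels where this task applies
        out[mask] = pred[mask]
    return out
```

The task map here plays the role of the spatial composition of tasks: editing it (task-editing) changes which task is answered where, without retraining.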
Related papers
- TaskExpert: Dynamically Assembling Multi-Task Representations with
Memorial Mixture-of-Experts [11.608682595506354]
Recent models consider directly decoding task-specific features from one shared task-generic feature.
As the input feature is fully shared and each task decoder also shares decoding parameters for different input samples, it leads to a static feature decoding process.
We propose TaskExpert, a novel multi-task mixture-of-experts model that enables learning multiple representative task-generic feature spaces.
arXiv Detail & Related papers (2023-07-28T06:00:57Z)
- Prompt Tuning with Soft Context Sharing for Vision-Language Models [42.61889428498378]
We propose a novel method to tune pre-trained vision-language models on multiple target few-shot tasks jointly.
We show that SoftCPT significantly outperforms single-task prompt tuning methods.
arXiv Detail & Related papers (2022-08-29T10:19:10Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate it as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as that of a dense model.
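The mechanism of task-aware gating with sparse activation can be sketched in a toy form. Everything below (class name, per-task gate vectors, top-1 routing) is an illustrative assumption based on the abstract, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class SparseTaskMoE:
    """Toy sparsely activated mixture-of-experts: a task-aware gate
    selects a single expert per example, so the forward cost matches
    that of one dense expert despite the larger parameter count."""

    def __init__(self, n_experts, d_in, d_out, n_tasks):
        self.experts = [rng.normal(size=(d_in, d_out))
                        for _ in range(n_experts)]
        # Task-aware gating: one score vector per task (assumed design).
        self.gate = rng.normal(size=(n_tasks, n_experts))

    def forward(self, x, task_id):
        scores = self.gate[task_id]   # gating depends on the task
        k = int(np.argmax(scores))    # top-1 routing -> sparse activation
        return x @ self.experts[k], k
```

Because routing is top-1, only one expert's parameters are touched per example, which is the sense in which the model is "sparsely activated" while holding many parameters in total.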
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
- Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division [60.232542918414985]
Multi-task learning often suffers from negative transfer, i.e., sharing information that should remain task-specific.
The proposed method addresses this by using proto-policies as modules to divide the tasks into simple sub-behaviours that can be shared.
We also demonstrate its ability to autonomously divide the tasks into both shared and task-specific sub-behaviours.
arXiv Detail & Related papers (2022-03-28T15:53:17Z)
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
We propose MultiRavens, a new suite of benchmarks aimed at compositional tasks, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient affects another task's loss.
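A toy version of this gradient-based affinity measure could look as follows. This is a simplification under stated assumptions (scalar parameters, explicit per-task loss and gradient functions); the paper's actual procedure operates on a shared multi-task network during one training run:

```python
import numpy as np

def task_affinity(w, loss_fns, grad_fns, lr=0.1):
    """affinity[i, j]: relative drop in task j's loss after one
    gradient step on task i (positive = a step on i helped j)."""
    n = len(loss_fns)
    aff = np.zeros((n, n))
    for i in range(n):
        w_step = w - lr * grad_fns[i](w)      # one step on task i only
        for j in range(n):
            before = loss_fns[j](w)
            after = loss_fns[j](w_step)
            aff[i, j] = 1.0 - after / before  # >0 means j improved
    return aff
```

Tasks with mutually positive affinity are natural candidates to train together, while negative entries signal interference and suggest separate groupings.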
arXiv Detail & Related papers (2021-09-10T02:01:43Z)
- Navigating the Trade-Off between Multi-Task Learning and Learning to Multitask in Deep Neural Networks [9.278739724750343]
Multi-task learning refers to a paradigm in machine learning in which a network is trained on various related tasks to facilitate the acquisition of those tasks.
Multitasking, a term used especially in the cognitive science literature, denotes the ability to execute multiple tasks simultaneously.
We show that the same tension arises in deep networks and discuss a meta-learning algorithm for an agent to manage this trade-off in an unfamiliar environment.
arXiv Detail & Related papers (2020-07-20T23:26:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.