Multi-Task Reinforcement Learning with Context-based Representations
- URL: http://arxiv.org/abs/2102.06177v1
- Date: Thu, 11 Feb 2021 18:41:27 GMT
- Title: Multi-Task Reinforcement Learning with Context-based Representations
- Authors: Shagun Sodhani, Amy Zhang, Joelle Pineau
- Abstract summary: We propose an efficient approach to knowledge transfer through the use of multiple context-dependent, composable representations across a family of tasks.
We use the proposed approach to obtain state-of-the-art results in Meta-World, a challenging multi-task benchmark consisting of 50 distinct robotic manipulation tasks.
- Score: 43.93866702838777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The benefit of multi-task learning over single-task learning relies on the
ability to use relations across tasks to improve performance on any single
task. While sharing representations is an important mechanism to share
information across tasks, its success depends on how well the structure
underlying the tasks is captured. In some real-world situations, we have access
to metadata, or additional information about a task, that may not provide any
new insight in the context of a single task setup alone but inform relations
across multiple tasks. While this metadata can be useful for improving
multi-task learning performance, effectively incorporating it can be an
additional challenge. We posit that an efficient approach to knowledge transfer
is through the use of multiple context-dependent, composable representations
shared across a family of tasks. In this framework, metadata can help to learn
interpretable representations and provide the context to inform which
representations to compose and how to compose them. We use the proposed
approach to obtain state-of-the-art results in Meta-World, a challenging
multi-task benchmark consisting of 50 distinct robotic manipulation tasks.
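To make the abstract's core idea concrete, here is a minimal, hedged sketch of context-dependent, composable representations: a small bank of encoders whose outputs are mixed by attention weights computed from an embedded task-metadata context. This is an illustration under assumptions (PyTorch, a pre-computed metadata embedding), not the paper's actual implementation; the class name ContextualMixtureEncoder and all dimensions are hypothetical.

```python
# Minimal sketch (not the authors' code) of composing K encoders with
# metadata-driven attention, assuming a PyTorch setup.
import torch
import torch.nn as nn


class ContextualMixtureEncoder(nn.Module):
    def __init__(self, obs_dim: int, ctx_dim: int, repr_dim: int, num_encoders: int):
        super().__init__()
        # K independent encoders, each free to specialize on a different
        # aspect of the task family (e.g. object type, goal, skill).
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, repr_dim), nn.ReLU(),
                          nn.Linear(repr_dim, repr_dim))
            for _ in range(num_encoders)
        ])
        # The metadata context decides which encoders to compose and how:
        # it is mapped to a soft attention distribution over the K encoders.
        self.attention = nn.Sequential(
            nn.Linear(ctx_dim, num_encoders), nn.Softmax(dim=-1)
        )

    def forward(self, obs: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        # (batch, K, repr_dim): every encoder processes the observation.
        encoded = torch.stack([enc(obs) for enc in self.encoders], dim=1)
        # (batch, K, 1): context-dependent mixing weights.
        weights = self.attention(ctx).unsqueeze(-1)
        # Weighted composition yields the task-conditioned representation
        # that a shared policy head would consume.
        return (weights * encoded).sum(dim=1)


if __name__ == "__main__":
    enc = ContextualMixtureEncoder(obs_dim=39, ctx_dim=64, repr_dim=128, num_encoders=4)
    obs = torch.randn(8, 39)    # batch of observations (dimensions are illustrative)
    ctx = torch.randn(8, 64)    # embedded task metadata, e.g. from a text encoder
    print(enc(obs, ctx).shape)  # torch.Size([8, 128])
```

In this sketch the metadata only gates how the encoder outputs are mixed; reproducing the reported Meta-World results would additionally require the paper's full multi-task RL training setup, which is omitted here.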
Related papers
- Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets [2.1178416840822023]
Partial multi-task learning, where each training example is annotated for one of the target tasks, is a promising idea in remote sensing.
This paper proposes using knowledge distillation to replace the need for ground-truth labels for the alternate task and to enhance the performance of such an approach.
arXiv Detail & Related papers (2024-05-24T09:48:50Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful for classification tasks with few, or even non-overlapping, annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Musketeer: Joint Training for Multi-task Vision Language Model with Task Explanation Prompts [75.75548749888029]
We present a vision-language model whose parameters are jointly trained on all tasks and fully shared among multiple heterogeneous tasks.
With a single model, Musketeer achieves results comparable to or better than strong baselines trained on single tasks, almost uniformly across multiple tasks.
arXiv Detail & Related papers (2023-05-11T17:57:49Z)
- Saliency-Regularized Deep Multi-Task Learning [7.3810864598379755]
Multitask learning forces multiple learning tasks to share knowledge in order to improve their generalization abilities.
Modern deep multitask learning methods can jointly learn latent features and task sharing, but the learned task relations remain obscure.
This paper proposes a new multitask learning framework that jointly learns latent features and explicit task relations.
arXiv Detail & Related papers (2022-07-03T20:26:44Z)
- Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division
Multi-task learning often suffers from negative transfer, sharing information that should be task-specific.
The proposed approach mitigates this by using proto-policies as modules that divide the tasks into simple sub-behaviours which can be shared.
We also demonstrate its ability to autonomously divide the tasks into both shared and task-specific sub-behaviours.
arXiv Detail & Related papers (2022-03-28T15:53:17Z)
- Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task [24.618094251341958]
Multi-task learning improves model performance by transferring and exploiting common knowledge among tasks.
We propose a framework that learns these tasks by leveraging abundant information from a learnt auxiliary big task whose classes are numerous enough to cover those of all the target tasks.
Our experimental results demonstrate its effectiveness in comparison with the state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-07T02:46:47Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Exploring Relational Context for Multi-Task Dense Prediction [76.86090370115]
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z)
- Context-Aware Multi-Task Learning for Traffic Scene Recognition in Autonomous Vehicles [10.475998113861895]
We propose an algorithm to jointly learn the task-specific and shared representations by adopting a multi-task learning network.
Experiments on the large-scale dataset HSD demonstrate the effectiveness and superiority of our network over state-of-the-art methods.
arXiv Detail & Related papers (2020-04-03T03:09:26Z)