Learning Multi-Task Transferable Rewards via Variational Inverse
Reinforcement Learning
- URL: http://arxiv.org/abs/2206.09498v1
- Date: Sun, 19 Jun 2022 22:32:41 GMT
- Authors: Se-Wook Yoo, Seung-Woo Seo
- Score: 10.782043595405831
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many robotic tasks are composed of many temporally correlated sub-tasks
in a highly complex environment. Solving such problems effectively requires
discovering situational intentions and proper actions by deliberating over
temporal abstractions. To capture intentions separately from changing task
dynamics, we extend an empowerment-based regularization technique to settings
with multiple tasks within the framework of a generative adversarial network.
In multitask environments with unknown dynamics, we focus on learning a
reward and policy from unlabeled expert examples. In this study, we define
situational empowerment as the maximum of the mutual information representing how
an action conditioned on both a certain state and sub-task affects the future.
Our proposed method derives a variational lower bound on this situational
mutual information and optimizes it. We simultaneously learn the transferable
multi-task reward function and policy by adding an induced term to the
objective function. The resulting multi-task reward function helps to learn
a policy that is robust to environmental change. We validate the advantages of our
approach on multi-task learning and multi-task transfer learning,
demonstrating that our proposed method is robust to both randomness and
changing task dynamics. Finally, we show that our method achieves significantly
better performance and data efficiency than existing imitation learning methods
on various benchmarks.
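One way to make the abstract's central quantity concrete: writing s for the state, k for the sub-task, a for an action drawn from a policy-like source ω(a|s,k), and s' for the resulting next state (this notation is assumed here, not taken from the paper), situational empowerment and a standard Barber–Agakov-style variational lower bound on its mutual information can be sketched as:

```latex
% Situational empowerment: the maximal mutual information between the
% action and the future state, conditioned on the current state and sub-task.
\mathcal{E}(s, k) \;=\; \max_{\omega}\; I(a; s' \mid s, k)

% Variational lower bound using an auxiliary inverse model
% q_\phi(a \mid s', s, k):
I(a; s' \mid s, k)
  \;\ge\;
  \mathbb{E}_{\,\omega(a \mid s, k)\, p(s' \mid s, a)}
    \bigl[\log q_\phi(a \mid s', s, k)\bigr]
  \;+\; \mathcal{H}(a \mid s, k)
```

The bound is tight when q_\phi matches the true posterior p(a | s', s, k), which is why jointly optimizing the bound over ω and q_\phi approximates maximizing the conditional mutual information itself.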
Related papers
- Active Fine-Tuning of Generalist Policies [54.65568433408307]
We propose AMF (Active Multi-task Fine-tuning) to maximize multi-task policy performance under a limited demonstration budget.
We derive performance guarantees for AMF under regularity assumptions and demonstrate its empirical effectiveness in complex and high-dimensional environments.
arXiv Detail & Related papers (2024-10-07T13:26:36Z) - Multitask Learning with No Regret: from Improved Confidence Bounds to
Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We propose a novel online learning algorithm that achieves improved regret without knowing the task-similarity parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z) - Saliency-Regularized Deep Multi-Task Learning [7.3810864598379755]
Multitask learning encourages multiple learning tasks to share knowledge to improve their generalization abilities.
Modern deep multitask learning can jointly learn latent features and task sharing, but the task relations it learns remain obscure.
This paper proposes a new multitask learning framework that jointly learns latent features and explicit task relations.
arXiv Detail & Related papers (2022-07-03T20:26:44Z) - An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale
Multitask Learning Systems [4.675744559395732]
Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer.
State-of-the-art ML models rely on heavy per-task customization and leverage model size and data scale rather than scaling the number of tasks.
We propose an evolutionary method that can generate a large scale multitask model and can support the dynamic and continuous addition of new tasks.
arXiv Detail & Related papers (2022-05-25T13:10:47Z) - Human-Centered Prior-Guided and Task-Dependent Multi-Task Representation
Learning for Action Recognition Pre-Training [8.571437792425417]
We propose a novel action recognition pre-training framework that exploits human-centered prior knowledge to generate more informative representations.
Specifically, we distill knowledge from a human parsing model to enrich the semantic capability of representation.
In addition, we combine knowledge distillation with contrastive learning to constitute a task-dependent multi-task framework.
arXiv Detail & Related papers (2022-04-27T06:51:31Z) - Leveraging convergence behavior to balance conflicting tasks in
multi-task learning [3.6212652499950138]
Multi-Task Learning uses correlated tasks to improve generalization performance.
Tasks often conflict with each other, which makes it challenging to define how the gradients of multiple tasks should be combined.
We propose a method that takes into account the temporal behaviour of the gradients to create a dynamic bias that adjusts the importance of each task during backpropagation.
arXiv Detail & Related papers (2022-04-14T01:52:34Z) - Modular Adaptive Policy Selection for Multi-Task Imitation Learning
through Task Division [60.232542918414985]
Multi-task learning often suffers from negative transfer, sharing information that should remain task-specific.
The proposed method mitigates this by using proto-policies as modules to divide the tasks into simple sub-behaviours that can be shared.
We also demonstrate its ability to autonomously divide the tasks into both shared and task-specific sub-behaviours.
arXiv Detail & Related papers (2022-03-28T15:53:17Z) - Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z) - Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
arXiv Detail & Related papers (2020-10-29T08:25:43Z) - Gradient Surgery for Multi-Task Learning [119.675492088251]
Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks.
The reasons why multi-task learning is so challenging compared to single-task learning are not fully understood.
We propose a form of gradient surgery that projects a task's gradient onto the normal plane of the gradient of any other task that has a conflicting gradient.
arXiv Detail & Related papers (2020-01-19T06:33:47Z)
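The gradient-surgery idea in the last entry can be sketched in a few lines. This is an illustrative PCGrad-style implementation on raw NumPy vectors (the function name and details are assumptions, not taken from the paper's code):

```python
import numpy as np

def project_conflicting(grads):
    """Gradient surgery: for each task gradient, project out the
    component that conflicts with any other task's gradient (negative
    dot product), then sum the adjusted gradients."""
    adjusted = []
    for i, g_i in enumerate(grads):
        g = g_i.copy()
        # visit the other tasks in random order, as in the paper
        others = [j for j in range(len(grads)) if j != i]
        np.random.shuffle(others)
        for j in others:
            g_j = grads[j]
            dot = g @ g_j
            if dot < 0:  # conflict: project g onto the normal plane of g_j
                g = g - (dot / (g_j @ g_j)) * g_j
        adjusted.append(g)
    return np.sum(adjusted, axis=0)
```

With two conflicting gradients such as [1, 0] and [-1, 1], each projection removes exactly the component opposing the other task, so the combined update no longer increases either task's loss.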
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.