Learning When and What to Ask: a Hierarchical Reinforcement Learning Framework
- URL: http://arxiv.org/abs/2110.08258v1
- Date: Thu, 14 Oct 2021 01:30:36 GMT
- Title: Learning When and What to Ask: a Hierarchical Reinforcement Learning Framework
- Authors: Khanh Nguyen, Yonatan Bisk, Hal Daumé III
- Abstract summary: We formulate a hierarchical reinforcement learning framework for learning to decide when to request additional information from humans.
Results on a simulated human-assisted navigation problem demonstrate the effectiveness of our framework.
- Score: 17.017688226277834
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reliable AI agents should be mindful of the limits of their knowledge and
consult humans when sensing that they do not have sufficient knowledge to make
sound decisions. We formulate a hierarchical reinforcement learning framework
for learning to decide when to request additional information from humans and
what type of information would be helpful to request. Our framework extends
partially-observed Markov decision processes (POMDPs) by allowing an agent to
interact with an assistant to leverage their knowledge in accomplishing tasks.
Results on a simulated human-assisted navigation problem demonstrate the
effectiveness of our framework: aided with an interaction policy learned by our
method, a navigation policy achieves up to a 7x improvement in task success
rate compared to performing tasks only by itself. The interaction policy is
also efficient: on average, only a quarter of all actions taken during a task
execution are requests for information. We analyze benefits and challenges of
learning with a hierarchical policy structure and suggest directions for future
work.
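
To make the decision loop the abstract describes concrete, here is a minimal, self-contained sketch: a high-level interaction policy chooses when to act and what type of information to request, while a low-level navigation policy picks motor actions. This is not the authors' code; the toy environment, assistant, request vocabulary, and uncertainty heuristic are all hypothetical placeholders.

```python
# Hedged sketch of the hierarchical when/what-to-ask loop (all names hypothetical).
import random

REQUESTS = ["ask_subgoal", "ask_direction"]  # hypothetical request types

class ToyEnv:
    """Stand-in for the partially observed navigation environment."""
    def reset(self):
        self.t = 0
        return {"uncertain": True}

    def step(self, action):
        self.t += 1
        return {"uncertain": random.random() < 0.5}, self.t >= 20  # (obs, done)

class ToyAssistant:
    """Stand-in for the simulated human assistant."""
    def answer(self, request, obs):
        return f"hint for {request}"

def interaction_policy(obs, budget_left):
    """High level: decide *when* to ask (uncertain, budget left) and *what* to ask."""
    if budget_left > 0 and obs["uncertain"]:
        return random.choice(REQUESTS)
    return "act"

def navigation_policy(obs, hints):
    """Low level: pick a motor action, conditioned on accumulated hints."""
    return "forward"

def run_episode(env, assistant, budget=5):
    obs, hints, done, asked = env.reset(), [], False, 0
    while not done:
        choice = interaction_policy(obs, budget - asked)
        if choice == "act":
            obs, done = env.step(navigation_policy(obs, hints))
        else:  # a request is itself an action, competing with motor actions
            hints.append(assistant.answer(choice, obs))
            asked += 1
    return asked

print("requests issued:", run_episode(ToyEnv(), ToyAssistant()))
```

In the paper's actual setup both policies are learned; the stubs above only illustrate the control flow in which a typed request is one of the actions available at every step.
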
Related papers
- Learning to Look: Seeking Information for Decision Making via Policy Factorization [36.87799092971961]
We propose DISaM, a dual-policy solution composed of an information-seeking policy and an information-receiving policy.
We demonstrate the capabilities of our dual policy solution in five manipulation tasks that require information-seeking behaviors.
arXiv Detail & Related papers (2024-10-24T17:58:11Z)
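
A hedged sketch of the dual-policy factorization summarized above, assuming (as the summary suggests, not as the paper specifies) that the information-seeking policy is invoked whenever the information-receiving policy's action distribution is too uncertain; the names, toy distribution, and entropy threshold are invented for illustration.

```python
# Sketch of a dual-policy (seek vs. receive) factorization; all details assumed.
import math
import random

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def receiving_policy(observations):
    """Information-receiving (task) policy: an action plus its distribution.
    Placeholder: the distribution sharpens as observations accumulate."""
    k = len(observations)
    p = (k + 1) / (k + 2)
    return "grasp_object", [p, 1 - p]

def seeking_policy(observations):
    """Information-seeking policy: decide where to look next."""
    return random.choice(["look_left", "look_right", "zoom_in"])

def choose(observations, threshold=0.5):
    action, probs = receiving_policy(observations)
    if entropy(probs) > threshold:   # task policy too uncertain: gather information
        return "seek", seeking_policy(observations)
    return "act", action             # confident enough: execute the task action

observations = []
mode, action = choose(observations)
while mode == "seek":
    observations.append(f"view from {action}")  # pretend looking yields data
    mode, action = choose(observations)
print("executing:", action)
```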

- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
Reinforcement Learning (RL) is a key method for creating Artificial Intelligence (AI) agents.
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)

- Ask more, know better: Reinforce-Learned Prompt Questions for Decision Making with Large Language Models [18.409654309062027]
Large language models (LLMs) combine action-based policies with chain of thought (CoT) reasoning.
Human intervention is also required to develop grounding functions that ensure low-level controllers appropriately process CoT reasoning.
We propose a comprehensive training framework for complex task-solving, incorporating human prior knowledge into the learning of action policies.
arXiv Detail & Related papers (2023-10-27T13:19:19Z)

- Explaining Agent's Decision-making in a Hierarchical Reinforcement Learning Scenario [0.6643086804649938]
Reinforcement learning is a machine learning approach based on behavioral psychology.
In this work, we make use of the memory-based explainable reinforcement learning method in a hierarchical environment composed of sub-tasks.
arXiv Detail & Related papers (2022-12-14T01:18:45Z)

- Option-Aware Adversarial Inverse Reinforcement Learning for Robotic Control [44.77500987121531]
Hierarchical Imitation Learning (HIL) has been proposed to recover highly-complex behaviors in long-horizon tasks from expert demonstrations.
We develop a novel HIL algorithm based on Adversarial Inverse Reinforcement Learning.
We also propose a Variational Autoencoder framework for learning with our objectives in an end-to-end fashion.
arXiv Detail & Related papers (2022-10-05T00:28:26Z)

- Rethinking Learning Dynamics in RL using Adversarial Networks [79.56118674435844]
We present a learning mechanism for reinforcement learning of closely related skills parameterized via a skill embedding space.
The main contribution of our work is to formulate an adversarial training regime for reinforcement learning with the help of entropy-regularized policy gradient formulation.
arXiv Detail & Related papers (2022-01-27T19:51:09Z)

- Hierarchical Skills for Efficient Exploration [70.62309286348057]
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
arXiv Detail & Related papers (2021-10-20T22:29:32Z)

- Goal-Conditioned Reinforcement Learning with Imagined Subgoals [89.67840168694259]
We propose to incorporate imagined subgoals into policy learning to facilitate learning of complex tasks.
Imagined subgoals are predicted by a separate high-level policy, which is trained simultaneously with the policy and its critic.
We evaluate our approach on complex robotic navigation and manipulation tasks and show that it outperforms existing methods by a large margin.
arXiv Detail & Related papers (2021-07-01T15:30:59Z)
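
A brief sketch of the subgoal mechanism summarized above: a high-level policy "imagines" an intermediate subgoal between the current state and the final goal, and the low-level policy steers toward that nearer target. The midpoint-plus-noise heuristic below merely stands in for the learned high-level policy.

```python
# Sketch of high-level subgoal prediction guiding a low-level policy (illustrative only).
import numpy as np

def high_level_policy(state, goal):
    """'Imagine' an intermediate subgoal; a noisy midpoint as a placeholder."""
    return (state + goal) / 2.0 + np.random.normal(scale=0.1, size=state.shape)

def low_level_policy(state, subgoal):
    """Step toward the (nearer) subgoal instead of the distant final goal."""
    direction = subgoal - state
    return direction / (np.linalg.norm(direction) + 1e-8)

state, goal = np.zeros(2), np.array([10.0, 4.0])
for _ in range(60):
    subgoal = high_level_policy(state, goal)
    state = state + 0.5 * low_level_policy(state, subgoal)
print("distance to goal:", np.linalg.norm(goal - state))
```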

- PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z)
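
The relabeling idea in the PEBBLE summary can be illustrated with a short sketch: because rewards in the off-policy replay buffer come from a learned reward model rather than the environment, each model update triggers re-estimation of the stored rewards. The reward model here is a stub; only the relabeling pattern reflects the summary.

```python
# Sketch of replay-buffer relabeling after a reward-model update (stub model).
from dataclasses import dataclass
from typing import List

@dataclass
class Transition:
    state: float
    action: float
    reward: float  # produced by the learned reward model, not the environment

def reward_model(state, action, version):
    """Stub for a reward model trained on human preferences;
    'version' increments each time new feedback updates the model."""
    return version * (state + action)

def relabel(buffer: List[Transition], version: int) -> None:
    """Recompute every stored reward so off-policy learning
    always sees the latest reward estimates."""
    for tr in buffer:
        tr.reward = reward_model(tr.state, tr.action, version)

buffer = [Transition(0.1, 0.2, 0.0), Transition(0.3, -0.1, 0.0)]
relabel(buffer, version=2)  # e.g., after a fresh batch of preference labels
print([round(tr.reward, 3) for tr in buffer])
```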

- Coverage as a Principle for Discovering Transferable Behavior in Reinforcement Learning [16.12658895065585]
We argue that representation alone is not enough for efficient transfer in challenging domains and explore how to transfer knowledge through behavior.
The behavior of pre-trained policies may be used for solving the task at hand (exploitation) or for collecting useful data to solve the problem (exploration).
arXiv Detail & Related papers (2021-02-24T16:51:02Z)

- Towards Coordinated Robot Motions: End-to-End Learning of Motion Policies on Transform Trees [63.31965375413414]
We propose to solve multi-task problems through learning structured policies from human demonstrations.
Our structured policy is inspired by RMPflow, a framework for combining subtask policies on different spaces.
We derive an end-to-end learning objective function that is suitable for the multi-task problem.
arXiv Detail & Related papers (2020-12-24T22:46:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.