Successor Feature Landmarks for Long-Horizon Goal-Conditioned
Reinforcement Learning
- URL: http://arxiv.org/abs/2111.09858v1
- Date: Thu, 18 Nov 2021 18:36:05 GMT
- Title: Successor Feature Landmarks for Long-Horizon Goal-Conditioned
Reinforcement Learning
- Authors: Christopher Hoang, Sungryull Sohn, Jongwook Choi, Wilka Carvalho,
Honglak Lee
- Abstract summary: We introduce Successor Feature Landmarks (SFL), a framework for exploring large, high-dimensional environments.
SFL drives exploration by estimating state-novelty and enables high-level planning by abstracting the state-space as a non-parametric landmark-based graph.
We show in our experiments on MiniGrid and ViZDoom that SFL enables efficient exploration of large, high-dimensional state spaces.
- Score: 54.378444600773875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operating in the real-world often requires agents to learn about a complex
environment and apply this understanding to achieve a breadth of goals. This
problem, known as goal-conditioned reinforcement learning (GCRL), becomes
especially challenging for long-horizon goals. Current methods have tackled
this problem by augmenting goal-conditioned policies with graph-based planning
algorithms. However, they struggle to scale to large, high-dimensional state
spaces and assume access to exploration mechanisms for efficiently collecting
training data. In this work, we introduce Successor Feature Landmarks (SFL), a
framework for exploring large, high-dimensional environments so as to obtain a
policy that is proficient for any goal. SFL leverages the ability of successor
features (SF) to capture transition dynamics, using it to drive exploration by
estimating state-novelty and to enable high-level planning by abstracting the
state-space as a non-parametric landmark-based graph. We further exploit SF to
directly compute a goal-conditioned policy for inter-landmark traversal, which
we use to execute plans to "frontier" landmarks at the edge of the explored
state space. We show in our experiments on MiniGrid and ViZDoom that SFL
enables efficient exploration of large, high-dimensional state spaces and
outperforms state-of-the-art baselines on long-horizon GCRL tasks.
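As a reading aid, here is a minimal sketch of the loop the abstract describes: successor features (SF) act as a state representation that captures transition dynamics, low SF-similarity to existing landmarks flags novel states, landmarks and their SF-distances form a non-parametric graph, and high-level plans are shortest paths toward rarely visited "frontier" landmarks. The class, the helper names (sf_encoder, maybe_add_landmark, frontier_landmark), and the thresholding scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an SF-based landmark framework, assuming a trained
# SF encoder psi(s); names and thresholds are placeholders, not the paper's code.
import numpy as np
import networkx as nx

class SuccessorFeatureLandmarks:
    def __init__(self, sf_encoder, novelty_threshold, edge_threshold):
        self.sf = sf_encoder              # maps an observation to its SF vector psi(s)
        self.novelty_threshold = novelty_threshold
        self.edge_threshold = edge_threshold
        self.graph = nx.Graph()           # non-parametric landmark-based graph
        self.landmarks = []               # SF vectors of stored landmark states
        self.visits = []                  # visitation counts per landmark

    def sf_distance(self, psi_a, psi_b):
        # SF distance stands in for a transition-dynamics-aware distance between states.
        return float(np.linalg.norm(psi_a - psi_b))

    def maybe_add_landmark(self, obs):
        psi = self.sf(obs)
        dists = [self.sf_distance(psi, lm) for lm in self.landmarks]
        # State novelty: far (in SF space) from every existing landmark.
        if not dists or min(dists) > self.novelty_threshold:
            idx = len(self.landmarks)
            self.landmarks.append(psi)
            self.visits.append(0)
            self.graph.add_node(idx)
            # Connect landmarks that are close in SF space, i.e. plausibly traversable.
            for j, d in enumerate(dists):
                if d < self.edge_threshold:
                    self.graph.add_edge(idx, j, weight=d)
        else:
            self.visits[int(np.argmin(dists))] += 1

    def frontier_landmark(self):
        # Frontier = a rarely visited landmark at the edge of the explored region.
        return int(np.argmin(self.visits))

    def plan(self, current_idx, target_idx):
        # High-level plan: shortest path over the landmark graph; each edge is then
        # handed to a goal-conditioned policy for inter-landmark traversal.
        return nx.shortest_path(self.graph, current_idx, target_idx, weight="weight")
```

A goal-conditioned policy for traversing each edge of the returned plan could be derived from the same SF, e.g. by scoring actions with something like Q(s, a, g) ≈ ψ(s, a)·w_g as in standard successor-feature work; the paper's exact construction is given in the full text.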
Related papers
- Offline Policy Learning via Skill-step Abstraction for Long-horizon Goal-Conditioned Tasks [7.122367852177223]
We present an offline GC policy learning framework tailored for tackling long-horizon GC tasks.
In the framework, a GC policy is progressively learned offline in conjunction with the incremental modeling of skill-step abstractions on the data.
We demonstrate the superiority and efficiency of our GLvSA framework in adapting GC policies to a wide range of long-horizon goals.
arXiv Detail & Related papers (2024-08-21T03:05:06Z)
- HIQL: Offline Goal-Conditioned RL with Latent States as Actions [81.67963770528753]
We propose a hierarchical algorithm for goal-conditioned RL from offline data.
We show how this hierarchical decomposition makes our method robust to noise in the estimated value function.
Our method can solve long-horizon tasks that stymie prior methods, can scale to high-dimensional image observations, and can readily make use of action-free data.
arXiv Detail & Related papers (2023-07-22T00:17:36Z)
- Efficient Learning of High Level Plans from Play [57.29562823883257]
We present Efficient Learning of High-Level Plans from Play (ELF-P), a framework for robotic learning that bridges motion planning and deep RL.
We demonstrate that ELF-P has significantly better sample efficiency than relevant baselines over multiple realistic manipulation tasks.
arXiv Detail & Related papers (2023-03-16T20:09:47Z)
- Goal Exploration Augmentation via Pre-trained Skills for Sparse-Reward Long-Horizon Goal-Conditioned Reinforcement Learning [6.540225358657128]
Reinforcement learning (RL) often struggles to accomplish a sparse-reward long-horizon task in a complex environment.
Goal-conditioned reinforcement learning (GCRL) has been employed to tackle this difficult problem via a curriculum of easy-to-reach sub-goals.
In GCRL, exploring novel sub-goals is essential for the agent to ultimately find the pathway to the desired goal.
arXiv Detail & Related papers (2022-10-28T11:11:04Z)
- Long-HOT: A Modular Hierarchical Approach for Long-Horizon Object Transport [83.06265788137443]
We address key challenges in long-horizon embodied exploration and navigation by proposing a new object transport task and a novel modular framework for temporally extended navigation.
Our first contribution is the design of a novel Long-HOT environment focused on deep exploration and long-horizon planning.
We propose a modular hierarchical transport policy (HTP) that builds a topological graph of the scene to perform exploration with the help of weighted frontiers.
arXiv Detail & Related papers (2022-10-28T05:30:49Z)
- Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning [64.97599673479678]
We present HIerarchical reinforcement learning Guided by Landmarks (HIGL).
HIGL is a novel framework for training a high-level policy with a reduced action space guided by landmarks.
Our experiments demonstrate that our framework outperforms prior art across a variety of control tasks.
arXiv Detail & Related papers (2021-10-26T12:16:19Z)
- Model-Based Reinforcement Learning via Latent-Space Collocation [110.04005442935828]
We argue that it is easier to solve long-horizon tasks by planning sequences of states rather than just actions.
We adapt the idea of collocation, which has shown good results on long-horizon tasks in optimal control literature, to the image-based setting by utilizing learned latent state space models.
arXiv Detail & Related papers (2021-06-24T17:59:18Z)
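For intuition on the collocation idea in the last entry above, the toy sketch below optimizes a sequence of latent states and actions jointly, penalizing violations of a learned latent dynamics model instead of rolling the model forward. The functions latent_dynamics, goal_cost, and collocate, along with all shapes and hyperparameters, are hypothetical placeholders for illustration, not that paper's implementation.

```python
# Toy latent-space collocation: jointly optimize latent states z_1..z_T and
# actions a_0..a_{T-1}, treating learned dynamics as a soft constraint.
import numpy as np

def latent_dynamics(z, a):
    # Placeholder for a learned latent transition model f(z, a) -> z'.
    return z + 0.1 * a

def goal_cost(z, z_goal):
    return np.sum((z - z_goal) ** 2)

def collocate(z0, z_goal, horizon=10, dim_z=4, dim_a=4,
              dyn_weight=10.0, steps=200, lr=0.05):
    rng = np.random.default_rng(0)
    Z = rng.normal(size=(horizon, dim_z))   # decision variables: future latents
    A = rng.normal(size=(horizon, dim_a))   # decision variables: actions

    def objective(Z, A):
        cost = goal_cost(Z[-1], z_goal)
        prev = z0
        for t in range(horizon):
            # Dynamics violations are penalized, not enforced by rollout.
            cost += dyn_weight * np.sum((Z[t] - latent_dynamics(prev, A[t])) ** 2)
            prev = Z[t]
        return cost

    for _ in range(steps):
        # Crude forward-difference gradient descent; a real implementation
        # would differentiate through the learned model with autodiff.
        gZ, gA, eps = np.zeros_like(Z), np.zeros_like(A), 1e-4
        base = objective(Z, A)
        for idx in np.ndindex(Z.shape):
            Zp = Z.copy(); Zp[idx] += eps
            gZ[idx] = (objective(Zp, A) - base) / eps
        for idx in np.ndindex(A.shape):
            Ap = A.copy(); Ap[idx] += eps
            gA[idx] = (objective(Z, Ap) - base) / eps
        Z -= lr * gZ
        A -= lr * gA
    return Z, A
```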
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.