HIQL: Offline Goal-Conditioned RL with Latent States as Actions
- URL: http://arxiv.org/abs/2307.11949v4
- Date: Sun, 10 Mar 2024 04:26:48 GMT
- Title: HIQL: Offline Goal-Conditioned RL with Latent States as Actions
- Authors: Seohong Park, Dibya Ghosh, Benjamin Eysenbach, Sergey Levine
- Abstract summary: We propose a hierarchical algorithm for goal-conditioned RL from offline data.
We show how this hierarchical decomposition makes our method robust to noise in the estimated value function.
Our method can solve long-horizon tasks that stymie prior methods, can scale to high-dimensional image observations, and can readily make use of action-free data.
- Score: 81.67963770528753
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised pre-training has recently become the bedrock for computer vision
and natural language processing. In reinforcement learning (RL),
goal-conditioned RL can potentially provide an analogous self-supervised
approach for making use of large quantities of unlabeled (reward-free) data.
However, building effective algorithms for goal-conditioned RL that can learn
directly from diverse offline data is challenging, because it is hard to
accurately estimate the exact value function for faraway goals. Nonetheless,
goal-reaching problems exhibit structure, such that reaching distant goals
entails first passing through closer subgoals. This structure can be very
useful, as assessing the quality of actions for nearby goals is typically
easier than for more distant goals. Based on this idea, we propose a
hierarchical algorithm for goal-conditioned RL from offline data. Using one
action-free value function, we learn two policies that allow us to exploit this
structure: a high-level policy that treats states as actions and predicts (a
latent representation of) a subgoal and a low-level policy that predicts the
action for reaching this subgoal. Through analysis and didactic examples, we
show how this hierarchical decomposition makes our method robust to noise in
the estimated value function. We then apply our method to offline goal-reaching
benchmarks, showing that our method can solve long-horizon tasks that stymie
prior methods, can scale to high-dimensional image observations, and can
readily make use of action-free data. Our code is available at
https://seohong.me/projects/hiql/
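
To make the hierarchical decomposition in the abstract concrete, below is a minimal, self-contained sketch of the test-time control loop it describes: a high-level policy maps the current state and a distant goal to (a latent representation of) a nearby subgoal, and a low-level policy maps the state and that latent subgoal to a primitive action. The function names (`encode`, `high_policy`, `low_policy`, `hierarchical_act`) and the toy placeholder bodies are illustrative only, standing in for the learned networks; this is not the released implementation.

```python
import numpy as np

STATE_DIM, LATENT_DIM, ACTION_DIM = 4, 8, 2

def encode(state):
    # Placeholder for a learned latent representation phi(s); a fixed linear
    # map keeps the sketch self-contained and runnable.
    W = np.ones((LATENT_DIM, STATE_DIM)) / STATE_DIM
    return W @ state

def high_policy(state, goal):
    # High-level policy ("states as actions"): propose a subgoal partway
    # toward the distant goal and return its latent representation.
    subgoal_state = state + 0.5 * (goal - state)   # toy midpoint subgoal
    return encode(subgoal_state)

def low_policy(state, subgoal_latent):
    # Low-level policy: output a primitive action aimed at the nearby subgoal.
    return subgoal_latent[:ACTION_DIM] - state[:ACTION_DIM]

def hierarchical_act(state, goal):
    # Test-time loop: pick a closer subgoal first, then act toward it.
    subgoal_latent = high_policy(state, goal)
    return low_policy(state, subgoal_latent)

rng = np.random.default_rng(0)
state, goal = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
print(hierarchical_act(state, goal))
```

Per the abstract, both policies are learned from a single action-free value function; the sketch only shows the interface, in which the high-level output lives in (latent) state space so that its quality can be judged by how much closer the proposed subgoal is to the goal.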
Related papers
- Foundation Policies with Hilbert Representations [54.44869979017766]
We propose an unsupervised framework to pre-train generalist policies from unlabeled offline data.
Our key insight is to learn a structured representation that preserves the temporal structure of the underlying environment.
Our experiments show that our unsupervised policies can solve goal-conditioned and general RL tasks in a zero-shot fashion.
arXiv Detail & Related papers (2024-02-23T19:09:10Z) - GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models [31.628341050846768]
Goal-conditioned Offline Planning (GOPlan) is a novel model-based framework that contains two key phases.
GOPlan pretrains a prior policy capable of capturing the multi-modal action distribution within the multi-goal dataset.
The reanalysis method generates high-quality imaginary data by planning with learned models for both intra-trajectory and inter-trajectory goals.
arXiv Detail & Related papers (2023-10-30T21:19:52Z) - Goal-Conditioned Imitation Learning using Score-based Diffusion Policies [3.49482137286472]
We propose a new policy representation based on score-based diffusion models (SDMs).
We apply our new policy representation in the domain of Goal-Conditioned Imitation Learning (GCIL).
We show how BESO can even be used to learn a goal-independent policy from play data using classifier-free guidance.
arXiv Detail & Related papers (2023-04-05T15:52:34Z) - Imitating Graph-Based Planning with Goal-Conditioned Policies [72.61631088613048]
We present a self-imitation scheme which distills a subgoal-conditioned policy into the target-goal-conditioned policy.
We empirically show that our method can significantly boost the sample-efficiency of the existing goal-conditioned RL methods.
arXiv Detail & Related papers (2023-03-20T14:51:10Z) - Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in
Latent Space [76.46113138484947]
General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments.
To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach goals for a wide range of tasks on command.
We propose Planning to Practice, a method that makes it practical to train goal-conditioned policies for long-horizon tasks.
arXiv Detail & Related papers (2022-05-17T06:58:17Z) - Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning [71.52722621691365]
Building generalizable goal-conditioned agents from rich observations is key to reinforcement learning (RL) solving real-world problems.
We propose a new form of state abstraction called goal-conditioned bisimulation.
We learn this representation using a metric form of this abstraction, and show its ability to generalize to new goals in simulation manipulation tasks.
arXiv Detail & Related papers (2022-04-27T17:00:11Z) - C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks [133.40619754674066]
Goal-conditioned reinforcement learning can solve tasks in a wide range of domains, including navigation and manipulation.
We propose to solve the distant goal-reaching task by using search at training time to automatically generate intermediate states.
The E-step corresponds to planning an optimal sequence of waypoints using graph search, while the M-step aims to learn a goal-conditioned policy to reach those waypoints.
arXiv Detail & Related papers (2021-10-22T22:05:31Z) - C-Learning: Horizon-Aware Cumulative Accessibility Estimation [29.588146016880284]
We introduce the concept of cumulative accessibility functions, which measure the reachability of a goal from a given state within a specified horizon.
We show that these functions obey a recurrence relation, which enables learning from offline interactions; a generic sketch of such a recurrence follows this entry.
We evaluate our approach on a set of multi-goal discrete and continuous control tasks.
arXiv Detail & Related papers (2020-11-24T20:34:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.