C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks
- URL: http://arxiv.org/abs/2110.12080v1
- Date: Fri, 22 Oct 2021 22:05:31 GMT
- Title: C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks
- Authors: Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey
Levine, Joseph E. Gonzalez
- Abstract summary: Goal-conditioned reinforcement learning can solve tasks in a wide range of domains, including navigation and manipulation.
We propose an algorithm that solves the distant goal-reaching task by using search at training time to automatically generate a curriculum of intermediate states.
The E-step corresponds to planning an optimal sequence of waypoints using graph search, while the M-step learns a goal-conditioned policy to reach those waypoints.
- Score: 133.40619754674066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Goal-conditioned reinforcement learning (RL) can solve tasks in a wide range
of domains, including navigation and manipulation, but learning to reach
distant goals remains a central challenge to the field. Learning to reach such
goals is particularly hard without any offline data, expert demonstrations, or
reward shaping. In this paper, we propose an algorithm to solve the distant
goal-reaching task by using search at training time to automatically generate a
curriculum of intermediate states. Our algorithm, Classifier-Planning
(C-Planning), frames the learning of the goal-conditioned policies as
expectation maximization: the E-step corresponds to planning an optimal
sequence of waypoints using graph search, while the M-step aims to learn a
goal-conditioned policy to reach those waypoints. Unlike prior methods that
combine goal-conditioned RL with graph search, ours performs search only during
training and not testing, significantly decreasing the compute costs of
deploying the learned policy. Empirically, we demonstrate that our method is
more sample efficient than prior methods. Moreover, it is able to solve very
long-horizon manipulation and navigation tasks that prior goal-conditioned
methods and methods based on graph search fail to solve.
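The EM loop described above can be pictured concretely. The following is a minimal sketch, not the authors' implementation: the helper names (build_graph, plan_waypoints, update_policy) are hypothetical, Euclidean distance stands in for the paper's learned reachability estimates, and the M-step is a placeholder for any goal-conditioned RL update.

import networkx as nx
import numpy as np

def build_graph(states, max_edge_dist=1.0):
    # Connect replay-buffer states that lie close together. C-Planning
    # would use learned distances here; Euclidean is a stand-in.
    g = nx.Graph()
    g.add_nodes_from(range(len(states)))
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            d = float(np.linalg.norm(states[i] - states[j]))
            if d < max_edge_dist:
                g.add_edge(i, j, weight=d)
    return g

def plan_waypoints(g, states, start, goal):
    # E-step: graph search for an optimal waypoint sequence between the
    # buffer states nearest to the current state and the distant goal.
    nearest = lambda x: int(np.argmin(np.linalg.norm(states - x, axis=1)))
    path = nx.shortest_path(g, nearest(start), nearest(goal), weight="weight")
    return [states[i] for i in path]

def update_policy(policy, state, waypoint):
    # M-step placeholder: any goal-conditioned RL update (e.g., an
    # actor-critic step toward the waypoint) would go here.
    return policy

rng = np.random.default_rng(0)
buffer_states = rng.uniform(0.0, 5.0, size=(200, 2))  # toy replay buffer
policy, start, goal = None, buffer_states[0], buffer_states[-1]
graph = build_graph(buffer_states)
for waypoint in plan_waypoints(graph, buffer_states, start, goal):
    policy = update_policy(policy, start, waypoint)

Note that the search happens only inside this training loop; at test time the learned policy is conditioned directly on the final goal, which is what removes the deployment-time planning cost.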
Related papers
- CQM: Curriculum Reinforcement Learning with a Quantized World Model [30.21954044028645]
We propose a novel curriculum method that automatically defines the semantic goal space, which contains vital information for the curriculum process.
Our method suggests uncertainty- and temporal-distance-aware curriculum goals that converge to the final goals over the automatically composed goal space.
It also outperforms state-of-the-art curriculum RL methods in data efficiency and performance on various goal-reaching tasks, even with egocentric visual inputs.
arXiv Detail & Related papers (2023-10-26T11:50:58Z)
- HIQL: Offline Goal-Conditioned RL with Latent States as Actions [81.67963770528753]
We propose a hierarchical algorithm for goal-conditioned RL from offline data.
We show how this hierarchical decomposition makes our method robust to noise in the estimated value function.
Our method can solve long-horizon tasks that stymie prior methods, can scale to high-dimensional image observations, and can readily make use of action-free data.
arXiv Detail & Related papers (2023-07-22T00:17:36Z)
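A hedged sketch of the hierarchical decomposition described in the entry above, with illustrative names rather than HIQL's actual interface (the paper additionally extracts both levels from a single offline-learned value function):

class HierarchicalAgent:
    # The high level proposes a nearby subgoal for a distant goal; the
    # low level produces primitive actions to reach that subgoal.
    def __init__(self, high_policy, low_policy):
        self.high = high_policy  # (state, final_goal) -> subgoal
        self.low = low_policy    # (state, subgoal) -> action

    def act(self, state, final_goal):
        subgoal = self.high(state, final_goal)
        return self.low(state, subgoal)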
- Imitating Graph-Based Planning with Goal-Conditioned Policies [72.61631088613048]
We present a self-imitation scheme that distills a subgoal-conditioned policy into the target-goal-conditioned policy.
We empirically show that our method can significantly boost the sample efficiency of existing goal-conditioned RL methods.
arXiv Detail & Related papers (2023-03-20T14:51:10Z)
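The distillation step mentioned in the entry above can be pictured as a supervised objective. This is a rough sketch with hypothetical names; in the paper the subgoals come from graph-based planning and the details differ:

import torch
import torch.nn.functional as F

def self_imitation_loss(policy, states, final_goals, subgoals):
    # Teacher: the policy conditioned on an easier, planner-provided
    # subgoal. Student: the same policy conditioned on the final goal.
    with torch.no_grad():
        teacher_actions = policy(states, subgoals)
    student_actions = policy(states, final_goals)
    return F.mse_loss(student_actions, teacher_actions)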
- Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally demonstrate improved expected return on out-of-distribution goals, while still allowing goals to be specified with expressive structure.
arXiv Detail & Related papers (2022-11-01T03:31:43Z)
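As a rough illustration of a discretizing bottleneck on goal representations (a vector-quantization-style sketch with hypothetical names; the paper's factorial structure and training details are omitted):

import torch

def discretize_goal(goal_embedding, codebook):
    # Snap a continuous goal embedding (shape [dim]) to its nearest
    # entry in a codebook (shape [num_codes, dim]), yielding a
    # discrete goal representation.
    dists = torch.cdist(goal_embedding.unsqueeze(0), codebook).squeeze(0)
    return codebook[dists.argmin()]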
- Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space [76.46113138484947]
General-purpose robots require diverse repertoires of behaviors to complete challenging tasks in real-world unstructured environments.
To address this issue, goal-conditioned reinforcement learning aims to acquire policies that can reach goals for a wide range of tasks on command.
We propose Planning to Practice, a method that makes it practical to train goal-conditioned policies for long-horizon tasks.
arXiv Detail & Related papers (2022-05-17T06:58:17Z)
- C-Learning: Horizon-Aware Cumulative Accessibility Estimation [29.588146016880284]
We introduce the concept of cumulative accessibility functions, which measure the reachability of a goal from a given state within a specified horizon.
We show that these functions obey a recurrence relation, which enables learning from offline interactions.
We evaluate our approach on a set of multi-goal discrete and continuous control tasks.
arXiv Detail & Related papers (2020-11-24T20:34:31Z)
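The recurrence mentioned in the entry above can be written out. The following is an illustrative form in our own notation, not necessarily the paper's: let C(s, a, g, h) denote the probability of reaching goal g within h steps after taking action a in state s. Then

C(s, a, g, 1) = \Pr(s' \in g \mid s, a),
C(s, a, g, h) = \mathbb{E}_{s' \sim p(\cdot \mid s, a)}\big[\max\big(\mathbb{1}[s' \in g],\ \max_{a'} C(s', a', g, h-1)\big)\big],

so accessibility at horizon h bootstraps from accessibility at horizon h-1, which is what makes temporal-difference-style learning from offline interactions possible.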
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
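A minimal sketch of a disagreement-based curriculum, assuming an ensemble of goal-value estimates (function names are hypothetical): goals where the ensemble disagrees most sit at the frontier of the agent's competence, so they are sampled more often.

import numpy as np

def sample_training_goal(candidate_goals, value_ensemble, rng):
    # Score each candidate goal by the standard deviation of an ensemble
    # of value estimates; goals the ensemble agrees are trivially easy
    # or currently hopeless receive little probability mass.
    values = np.array([[v(g) for g in candidate_goals] for v in value_ensemble])
    disagreement = values.std(axis=0)
    probs = disagreement / disagreement.sum()
    idx = rng.choice(len(candidate_goals), p=probs)
    return candidate_goals[idx]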
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.