Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning
- URL: http://arxiv.org/abs/2002.05954v4
- Date: Wed, 6 Oct 2021 22:00:02 GMT
- Title: Learning Functionally Decomposed Hierarchies for Continuous Control Tasks with Path Planning
- Authors: Sammy Christen, Lukas Jendele, Emre Aksan, Otmar Hilliges
- Abstract summary: We present HiDe, a novel hierarchical reinforcement learning architecture that successfully solves long horizon control tasks.
We experimentally show that our method generalizes across unseen test environments and can scale to 3x horizon length compared to both learning and non-learning based methods.
- Score: 36.050432925402845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present HiDe, a novel hierarchical reinforcement learning architecture
that successfully solves long horizon control tasks and generalizes to unseen
test scenarios. Functional decomposition between planning and low-level control
is achieved by explicitly separating the state-action spaces across the
hierarchy, which allows the integration of task-relevant knowledge per layer.
We propose an RL-based planner to efficiently leverage the information in the
planning layer of the hierarchy, while the control layer learns a
goal-conditioned control policy. The hierarchy is trained jointly but allows
for the modular transfer of policy layers across hierarchies of different
agents. We experimentally show that our method generalizes across unseen test
environments and can scale to 3x horizon length compared to both learning and
non-learning based methods. We evaluate on complex continuous control tasks
with sparse rewards, including navigation and robot manipulation.
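To make the functional decomposition described in the abstract concrete, the following is a minimal, hypothetical sketch of a two-layer hierarchy: a planning-layer policy that sees only planning-relevant state (here, the agent's 2D position and the final goal) and emits subgoals, and a goal-conditioned control-layer policy that sees only proprioceptive state and the current subgoal. All names (PlannerPolicy, ControlPolicy, run_episode), the gym-style environment interface, and the simple stand-in controllers are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a functionally decomposed two-layer hierarchy.
# Names and interfaces are illustrative assumptions, not the HiDe codebase.
import numpy as np


class PlannerPolicy:
    """Planning layer: observes only planning-relevant state (the agent's 2D
    position and the final goal) and emits intermediate subgoals."""

    def __init__(self, step_size: float = 1.0):
        self.step_size = step_size

    def propose_subgoal(self, agent_pos: np.ndarray, final_goal: np.ndarray) -> np.ndarray:
        # Stand-in for a learned RL planner: take a bounded step toward the goal.
        direction = final_goal - agent_pos
        dist = np.linalg.norm(direction)
        if dist <= self.step_size:
            return final_goal
        return agent_pos + self.step_size * direction / dist


class ControlPolicy:
    """Control layer: a goal-conditioned low-level policy that observes only
    proprioceptive state and the current subgoal."""

    def act(self, proprio_state: np.ndarray, subgoal: np.ndarray) -> np.ndarray:
        # Stand-in for a learned goal-conditioned policy: a proportional
        # controller that pushes the agent toward the subgoal.
        agent_pos = proprio_state[:2]
        return np.clip(subgoal - agent_pos, -1.0, 1.0)


def run_episode(env, planner, controller, final_goal, horizon=200, replan_every=10):
    """Roll out the hierarchy: the planner replans at a coarse timescale while
    the controller acts at every environment step (gym-style API assumed)."""
    obs = env.reset()
    subgoal = planner.propose_subgoal(obs[:2], final_goal)
    for t in range(horizon):
        if t % replan_every == 0:
            subgoal = planner.propose_subgoal(obs[:2], final_goal)
        action = controller.act(obs, subgoal)
        obs, reward, done, _ = env.step(action)
        if done:
            break
    return obs
```

In HiDe the planner itself is learned with RL over a planning-relevant representation; the fixed stand-ins above are used only to show how the two layers interact and why either layer could, in principle, be swapped or transferred across agents independently.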
Related papers
- On the benefits of pixel-based hierarchical policies for task generalization [7.207480346660617]
Reinforcement learning practitioners often avoid hierarchical policies, especially in image-based observation spaces.
We analyze the benefits of hierarchy through simulated multi-task robotic control experiments from pixels.
arXiv Detail & Related papers (2024-07-27T01:26:26Z)
- Reinforcement Learning with Options and State Representation [105.82346211739433]
This thesis aims to explore the reinforcement learning field and build on existing methods to produce improved ones.
It addresses these goals by decomposing learning tasks in a hierarchical fashion, an approach known as Hierarchical Reinforcement Learning.
arXiv Detail & Related papers (2024-03-16T08:30:55Z)
- Use All The Labels: A Hierarchical Multi-Label Contrastive Learning Framework [75.79736930414715]
We present a hierarchical multi-label representation learning framework that can leverage all available labels and preserve the hierarchical relationship between classes.
We introduce novel hierarchy-preserving losses, which jointly apply a hierarchical penalty to the contrastive loss and enforce the hierarchy constraint.
arXiv Detail & Related papers (2022-04-27T21:41:44Z)
- Provable Hierarchy-Based Meta-Reinforcement Learning [50.17896588738377]
We analyze HRL in the meta-RL setting, where the learner learns a latent hierarchical structure during meta-training for use in a downstream task.
We provide "diversity conditions" which, together with a tractable optimism-based algorithm, guarantee sample-efficient recovery of this natural hierarchy.
Our bounds incorporate common notions in the HRL literature, such as temporal and state/action abstractions, suggesting that our setting and analysis capture important features of HRL in practice.
arXiv Detail & Related papers (2021-10-18T17:56:02Z)
- Compositional Reinforcement Learning from Logical Specifications [21.193231846438895]
Recent approaches automatically generate a reward function from a given specification and use a suitable reinforcement learning algorithm to learn a policy.
We develop a compositional learning approach, called DiRL, that interleaves high-level planning and reinforcement learning.
Our approach uses reinforcement learning to learn a neural network policy for each edge (sub-task) and a Dijkstra-style planning algorithm to compute a high-level plan in the graph; a minimal sketch of this planning step appears after this list.
arXiv Detail & Related papers (2021-06-25T22:54:28Z)
- Learning Task Decomposition with Ordered Memory Policy Network [73.3813423684999]
We propose Ordered Memory Policy Network (OMPN) to discover subtask hierarchy by learning from demonstration.
OMPN can be applied to partially observable environments and still achieve higher task decomposition performance.
Our visualization confirms that the subtask hierarchy can emerge in our model.
arXiv Detail & Related papers (2021-03-19T18:13:35Z)
- Distilling a Hierarchical Policy for Planning and Control via Representation and Reinforcement Learning [18.415568038071306]
We present a hierarchical planning and control framework that enables an agent to perform various tasks and adapt to a new task flexibly.
Rather than learning an individual policy for each task, the proposed framework, DISH, distills a hierarchical policy from a set of tasks by representation and reinforcement learning.
arXiv Detail & Related papers (2020-11-16T23:58:49Z)
- From proprioception to long-horizon planning in novel environments: A hierarchical RL model [4.44317046648898]
In this work, we introduce a simple, three-level hierarchical architecture that reflects different types of reasoning.
We apply our method to a series of navigation tasks in the Mujoco Ant environment.
arXiv Detail & Related papers (2020-06-11T17:19:12Z)
- Hierarchical Reinforcement Learning as a Model of Human Task Interleaving [60.95424607008241]
We develop a hierarchical model of supervisory control driven by reinforcement learning.
The model reproduces known empirical effects of task interleaving.
The results support hierarchical RL as a plausible model of task interleaving.
arXiv Detail & Related papers (2020-01-04T17:53:28Z)
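As referenced in the Compositional Reinforcement Learning from Logical Specifications entry above, the following is a minimal, hypothetical sketch of a Dijkstra-style high-level planning step over an abstract sub-task graph. The graph, the per-edge success estimates, and the function name plan_high_level are illustrative assumptions, not the DiRL implementation; each edge of the returned plan would be executed by a separately learned RL sub-policy.

```python
# Hypothetical sketch of Dijkstra-style high-level planning over an abstract
# sub-task graph; the graph and edge success estimates are illustrative.
import heapq
import math


def plan_high_level(graph, edge_success_prob, start, goal):
    """Shortest path from start to goal, where each edge (u, v) costs
    -log p(success) of its sub-task policy, so the cheapest plan maximizes
    the product of per-edge success probabilities."""
    dist = {start: 0.0}
    prev = {}
    frontier = [(0.0, start)]
    visited = set()
    while frontier:
        d, u = heapq.heappop(frontier)
        if u in visited:
            continue
        visited.add(u)
        if u == goal:
            break
        for v in graph.get(u, []):
            cost = -math.log(max(edge_success_prob[(u, v)], 1e-9))
            if d + cost < dist.get(v, math.inf):
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(frontier, (d + cost, v))
    # Reconstruct the sequence of sub-tasks (edges) to hand to the RL policies.
    if goal not in prev and goal != start:
        return None
    path, node = [], goal
    while node != start:
        path.append((prev[node], node))
        node = prev[node]
    return list(reversed(path))


# Toy usage: abstract states q0..q2 with estimated sub-policy success rates.
graph = {"q0": ["q1", "q2"], "q1": ["q2"]}
edge_success_prob = {("q0", "q1"): 0.9, ("q1", "q2"): 0.8, ("q0", "q2"): 0.3}
print(plan_high_level(graph, edge_success_prob, "q0", "q2"))
# -> [('q0', 'q1'), ('q1', 'q2')] because 0.9 * 0.8 > 0.3
```

Using the negative log of an estimated success probability as the edge cost is one plausible way to couple the learned sub-policies with the planner: the shortest path then corresponds to the plan with the highest estimated chance of end-to-end success.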
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.