Creating Multi-Level Skill Hierarchies in Reinforcement Learning
- URL: http://arxiv.org/abs/2306.09980v2
- Date: Wed, 17 Jan 2024 15:03:18 GMT
- Title: Creating Multi-Level Skill Hierarchies in Reinforcement Learning
- Authors: Joshua B. Evans and Özgür Şimşek
- Abstract summary: We propose an answer based on a graphical representation of how the interaction between an agent and its environment may unfold.
Our approach uses modularity maximisation as a central organising principle to expose the structure of the interaction graph at multiple levels of abstraction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What is a useful skill hierarchy for an autonomous agent? We propose an
answer based on a graphical representation of how the interaction between an
agent and its environment may unfold. Our approach uses modularity maximisation
as a central organising principle to expose the structure of the interaction
graph at multiple levels of abstraction. The result is a collection of skills
that operate at varying time scales, organised into a hierarchy, where skills
that operate over longer time scales are composed of skills that operate over
shorter time scales. The entire skill hierarchy is generated automatically,
with no human intervention, including the skills themselves (their behaviour,
when they can be called, and when they terminate) as well as the hierarchical
dependency structure between them. In a wide range of environments, this
approach generates skill hierarchies that are intuitively appealing and that
considerably improve the learning performance of the agent.
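Modularity maximisation, the organising principle named in the abstract, scores a partition of a graph by how many edges fall within communities compared to what chance would predict; partitions with high modularity expose the graph's community structure. The following is a minimal sketch of that scoring step on a toy two-room interaction graph — the graph, the partitions, and the recursive multi-level step are illustrative assumptions, not the authors' actual algorithm.

```python
# Newman modularity Q = sum over communities c of [L_c/m - (d_c/2m)^2],
# where L_c is the number of intra-community edges, d_c the total degree
# of nodes in c, and m the number of edges in the graph.
from collections import defaultdict

def modularity(edges, communities):
    """Modularity of a partition (list of node sets) of an undirected graph."""
    m = len(edges)
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    q = 0.0
    for c in communities:
        l_c = sum(1 for u, v in edges if u in c and v in c)  # intra-community edges
        d_c = sum(degree[n] for n in c)                      # total degree in c
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

# Toy interaction graph: two densely connected "rooms" (triangles)
# joined by a single doorway edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

rooms = [{0, 1, 2}, {3, 4, 5}]      # partition along the doorway
trivial = [{0, 1, 2, 3, 4, 5}]      # everything in one community

print(modularity(edges, rooms))     # ~0.357: the partition exposes the rooms
print(modularity(edges, trivial))   # 0.0: no structure exposed
```

A multi-level hierarchy in this spirit would come from applying the same maximisation recursively within each community, yielding skills at progressively finer time scales.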
Related papers
- Reinforcement Learning with Options and State Representation [105.82346211739433]
This thesis aims to explore the reinforcement learning field and build on existing methods to produce improved ones.
It addresses such goals by decomposing learning tasks in a hierarchical fashion known as Hierarchical Reinforcement Learning.
arXiv Detail & Related papers (2024-03-16T08:30:55Z)
- SkillDiffuser: Interpretable Hierarchical Planning via Skill Abstractions in Diffusion-Based Task Execution [75.2573501625811]
Diffusion models have demonstrated strong potential for robotic trajectory planning.
However, generating coherent trajectories from high-level instructions remains challenging.
We propose SkillDiffuser, an end-to-end hierarchical planning framework.
arXiv Detail & Related papers (2023-12-18T18:16:52Z)
- Progressively Efficient Learning [58.6490456517954]
We develop a novel learning framework named Communication-Efficient Interactive Learning (CEIL).
CEIL leads to the emergence of a human-like pattern in which the learner and the teacher communicate efficiently by exchanging increasingly abstract intentions.
Agents trained with CEIL quickly master new tasks, outperforming non-hierarchical and hierarchical imitation learning by up to 50% and 20% in absolute success rate.
arXiv Detail & Related papers (2023-10-13T07:52:04Z)
- Hierarchical Empowerment: Towards Tractable Empowerment-Based Skill Learning [65.41865750258775]
General purpose agents will require large repertoires of skills.
We introduce a new framework, Hierarchical Empowerment, that makes computing empowerment more tractable.
In a popular ant navigation domain, our four-level agents are able to learn skills that cover a surface area over two orders of magnitude larger than prior work.
arXiv Detail & Related papers (2023-07-06T02:27:05Z)
- Learning Temporally Extended Skills in Continuous Domains as Symbolic Actions for Planning [2.642698101441705]
Problems which require both long-horizon planning and continuous control capabilities pose significant challenges to existing reinforcement learning agents.
We introduce a novel hierarchical reinforcement learning agent which links temporally extended skills for continuous control with a forward model in a symbolic abstraction of the environment's state for planning.
arXiv Detail & Related papers (2022-07-11T17:13:10Z)
- Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning [13.049516752695613]
We propose a framework where an agent learns a sufficient set of skill primitives to achieve all high-level goals in its environment.
The agent can then flexibly compose them both logically and temporally to provably achieve temporal logic specifications in any regular language.
This provides the agent with the ability to map from complex temporal logic task specifications to near-optimal behaviours zero-shot.
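One way logical composition of skills can work, as in Boolean task-algebra approaches to which this line of work is related: with goal-conditioned value functions, the conjunction and disjunction of goals can be approximated by elementwise min and max over the skills' values. The following sketch uses invented values purely for illustration and is not the paper's exact construction.

```python
# Compose two skill value functions (state -> value dictionaries)
# logically: AND via elementwise min, OR via elementwise max.
def q_and(q1, q2):
    """Approximate value of achieving goal 1 AND goal 2."""
    return {s: min(q1[s], q2[s]) for s in q1}

def q_or(q1, q2):
    """Approximate value of achieving goal 1 OR goal 2."""
    return {s: max(q1[s], q2[s]) for s in q1}

# Toy skills over three states: reach the blue object / reach the square object.
q_blue   = {"s0": 0.9, "s1": 0.2, "s2": 0.5}
q_square = {"s0": 0.1, "s1": 0.8, "s2": 0.5}

print(q_and(q_blue, q_square))  # {'s0': 0.1, 's1': 0.2, 's2': 0.5}
print(q_or(q_blue, q_square))   # {'s0': 0.9, 's1': 0.8, 's2': 0.5}
```

Temporal composition — sequencing such composed skills to satisfy a temporal logic specification — is then driven by an automaton over the goals, which the sketch above does not cover.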
arXiv Detail & Related papers (2022-05-25T07:05:24Z)
- Autonomous Open-Ended Learning of Tasks with Non-Stationary Interdependencies [64.0476282000118]
Intrinsic motivations have been shown to generate a task-agnostic signal for properly allocating training time amongst goals.
While the majority of works in the field of intrinsically motivated open-ended learning focus on scenarios where goals are independent of each other, only a few have studied the autonomous acquisition of interdependent tasks.
In particular, we first deepen the analysis of a previous system, showing the importance of incorporating information about the relationships between tasks at a higher level of the architecture.
Then we introduce H-GRAIL, a new system that extends the previous one by adding a new learning layer to store the autonomously acquired sequences.
arXiv Detail & Related papers (2022-05-16T10:43:01Z)
- Possibility Before Utility: Learning And Using Hierarchical Affordances [21.556661319375255]
Reinforcement learning algorithms struggle on tasks with complex hierarchical dependency structures.
We present Hierarchical Affordance Learning (HAL), a method that learns a model of hierarchical affordances in order to prune impossible subtasks for more effective learning.
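The pruning idea described above can be sketched as filtering candidate subtasks against the current context; only subtasks whose preconditions are met remain candidates. The task names and precondition sets below are invented for illustration, and HAL learns its affordance model rather than hand-coding it as done here.

```python
# Prune subtasks that are impossible in the current context: a subtask
# survives only if its preconditions are a subset of what currently holds.
def prune_subtasks(candidates, preconditions, context):
    """Return candidates whose precondition sets are satisfied by context."""
    return [t for t in candidates if preconditions[t] <= context]

# Toy crafting-style hierarchical dependency structure.
preconditions = {
    "chop-wood":   set(),
    "build-table": {"wood"},
    "craft-chair": {"wood", "table"},
}
candidates = ["chop-wood", "build-table", "craft-chair"]

print(prune_subtasks(candidates, preconditions, set()))     # ['chop-wood']
print(prune_subtasks(candidates, preconditions, {"wood"}))  # ['chop-wood', 'build-table']
```

Restricting exploration to the surviving subtasks is what makes learning more effective on tasks with deep dependency chains.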
arXiv Detail & Related papers (2022-03-23T19:17:22Z)
- Hierarchical Skills for Efficient Exploration [70.62309286348057]
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
arXiv Detail & Related papers (2021-10-20T22:29:32Z)
- Self-supervised Reinforcement Learning with Independently Controllable Subgoals [20.29444813790076]
Self-supervised agents set their own goals by exploiting the structure in the environment.
Some of them were applied to learn basic manipulation skills in compositional multi-object environments.
We propose a novel self-supervised agent that estimates relations between environment components and uses them to independently control different parts of the environment state.
arXiv Detail & Related papers (2021-09-09T10:21:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.