Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning
- URL: http://arxiv.org/abs/2205.12532v2
- Date: Sat, 16 Mar 2024 10:46:06 GMT
- Title: Skill Machines: Temporal Logic Skill Composition in Reinforcement Learning
- Authors: Geraud Nangue Tasse, Devon Jarvis, Steven James, Benjamin Rosman
- Abstract summary: We propose a framework where an agent learns a sufficient set of skill primitives to achieve all high-level goals in its environment.
The agent can then flexibly compose them both logically and temporally to provably achieve temporal logic specifications in any regular language.
This provides the agent with the ability to map from complex temporal logic task specifications to near-optimal behaviours zero-shot.
- Score: 13.049516752695613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is desirable for an agent to be able to solve a rich variety of problems that can be specified through language in the same environment. A popular approach towards obtaining such agents is to reuse skills learned in prior tasks to generalise compositionally to new ones. However, this is a challenging problem due to the curse of dimensionality induced by the combinatorially large number of ways high-level goals can be combined both logically and temporally in language. To address this problem, we propose a framework where an agent first learns a sufficient set of skill primitives to achieve all high-level goals in its environment. The agent can then flexibly compose them both logically and temporally to provably achieve temporal logic specifications in any regular language, such as regular fragments of linear temporal logic. This provides the agent with the ability to map from complex temporal logic task specifications to near-optimal behaviours zero-shot. We demonstrate this experimentally in a tabular setting, as well as in a high-dimensional video game and a continuous control environment. Finally, we also demonstrate that the performance of skill machines can be improved with standard off-policy reinforcement learning algorithms when optimal behaviours are desired.
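As a rough illustration of the composition described above, the sketch below combines tabular goal-conditioned Q-functions logically (elementwise min for conjunction, max for disjunction, following prior work on Boolean task composition) and advances a small finite-state machine over propositions for the temporal part. The environment, skill primitives, and toy automaton are hypothetical stand-ins, not the paper's implementation.

```python
# Minimal sketch of logical + temporal skill composition (assumed toy setup).
import numpy as np

n_states, n_actions = 5, 4
rng = np.random.default_rng(0)

# Hypothetical learned skill primitives: one Q-table per high-level goal.
Q = {
    "coffee": rng.random((n_states, n_actions)),
    "mail":   rng.random((n_states, n_actions)),
}

# Logical composition on value functions:
# conjunction ~ elementwise min, disjunction ~ elementwise max.
def q_and(*qs): return np.minimum.reduce(qs)
def q_or(*qs):  return np.maximum.reduce(qs)

# A skill machine is a finite-state machine over propositions; each machine
# state selects a (possibly composed) skill. Toy spec: "coffee, then mail".
skill_machine = {
    "u0": {"skill": Q["coffee"], "on": "coffee", "next": "u1"},
    "u1": {"skill": Q["mail"],   "on": "mail",   "next": "accept"},
}

def act(env_state, machine_state):
    """Greedy action under the skill selected by the current machine state."""
    return int(np.argmax(skill_machine[machine_state]["skill"][env_state]))

def transition(machine_state, true_propositions):
    """Advance the machine when its trigger proposition becomes true."""
    node = skill_machine[machine_state]
    return node["next"] if node["on"] in true_propositions else machine_state

u = "u0"
a = act(env_state=2, machine_state=u)            # act greedily toward "coffee"
u = transition(u, true_propositions={"coffee"})  # -> "u1": now pursue "mail"
```

Because the composed policy is read off directly from the stored primitives and the automaton state, no further learning is needed at specification time, which is what enables zero-shot behaviour.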
Related papers
- RL-GPT: Integrating Reinforcement Learning and Code-as-policy [82.1804241891039]
We introduce a two-level hierarchical framework, RL-GPT, comprising a slow agent and a fast agent.
The slow agent analyzes actions suitable for coding, while the fast agent executes coding tasks.
This decomposition focuses each agent on its specific task and proves highly efficient within our pipeline.
arXiv Detail & Related papers (2024-02-29T16:07:22Z)
- Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models [68.18370230899102]
We investigate how to elicit compositional generalization capabilities in large language models (LLMs).
We find that demonstrating both foundational skills and compositional examples grounded in these skills within the same prompt context is crucial.
We show that fine-tuning LLMs with SKiC-style (Skills-in-Context) data can elicit zero-shot weak-to-strong generalization.
arXiv Detail & Related papers (2023-08-01T05:54:12Z)
- Creating Multi-Level Skill Hierarchies in Reinforcement Learning [0.0]
We propose an approach based on a graphical representation of how the interaction between an agent and its environment may unfold.
Our approach uses modularity maximisation as a central organising principle to expose the structure of the interaction graph at multiple levels of abstraction.
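For intuition, here is a hedged sketch of that organising principle: partition a toy interaction graph with off-the-shelf modularity maximisation, recursing into communities to obtain multiple levels of abstraction. The graph and recursion scheme are illustrative assumptions, not the authors' code.

```python
# Sketch: multi-level structure via recursive modularity maximisation.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def skill_hierarchy(graph, depth=2):
    """Recursively partition the interaction graph into nested communities;
    each community is a candidate region for a skill at that level."""
    if depth == 0 or graph.number_of_nodes() < 3:
        return sorted(graph.nodes())
    communities = greedy_modularity_communities(graph)
    return [skill_hierarchy(graph.subgraph(c).copy(), depth - 1)
            for c in communities]

# Toy interaction graph: two densely connected "rooms" joined by a bridge.
G = nx.barbell_graph(5, 1)
print(skill_hierarchy(G))  # nested node groups, one level per recursion depth
```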
arXiv Detail & Related papers (2023-06-16T17:23:49Z)
- Thalamus: a brain-inspired algorithm for biologically-plausible continual learning and disentangled representations [0.0]
Animals thrive in a constantly changing environment and leverage its temporal structure to learn causal representations.
We introduce a simple algorithm that uses optimization at inference time to generate internal representations of temporal context.
We show that a network trained on a series of tasks using traditional weight updates can infer tasks dynamically.
We then alternate between the weight updates and the latent updates to arrive at Thalamus, a task-agnostic algorithm capable of discovering disentangled representations in a stream of unlabeled tasks.
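A simplified sketch of those alternating updates, under assumed architecture and losses (the model, optimisers, and data below are placeholders): at inference time only a latent context vector is optimised while the weights stay frozen, and during learning the latent and weight updates alternate.

```python
# Sketch: alternate inference-time latent updates with weight updates.
import torch

net = torch.nn.Sequential(torch.nn.Linear(8 + 4, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 2))
z = torch.zeros(4, requires_grad=True)           # latent task context
w_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
z_opt = torch.optim.Adam([z], lr=1e-1)

def loss_fn(x, y):
    inp = torch.cat([x, z.expand(x.shape[0], -1)], dim=-1)
    return torch.nn.functional.mse_loss(net(inp), y)

def infer_context(x, y, steps=20):
    """Inference-time task inference: adapt only z, weights frozen."""
    for _ in range(steps):
        z_opt.zero_grad(); loss_fn(x, y).backward(); z_opt.step()

def learn(x, y):
    """Alternate latent and weight updates on a stream of unlabeled tasks."""
    infer_context(x, y)                          # first infer the task context
    w_opt.zero_grad(); loss_fn(x, y).backward(); w_opt.step()

x, y = torch.randn(16, 8), torch.randn(16, 2)
learn(x, y)
```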
arXiv Detail & Related papers (2022-05-24T01:29:21Z)
- Possibility Before Utility: Learning And Using Hierarchical Affordances [21.556661319375255]
Reinforcement learning algorithms struggle on tasks with complex hierarchical dependency structures.
We present Hierarchical Affordance Learning (HAL), a method that learns a model of hierarchical affordances in order to prune impossible subtasks for more effective learning.
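As a hedged illustration of affordance-based pruning (the predictor, subtask names, and threshold below are hypothetical): subtasks the affordance model deems currently impossible are masked out before the high-level policy picks greedily.

```python
# Sketch: prune infeasible subtasks before greedy subtask selection.
import numpy as np

subtasks = ["mine_wood", "craft_plank", "craft_table"]

def affordance_probs(state):
    """Stand-in for a learned model p(subtask is currently possible | state)."""
    return np.array([0.9, 0.2, 0.05])  # e.g. no planks yet -> table infeasible

def select_subtask(state, q_values, threshold=0.5):
    """Mask subtasks below the feasibility threshold, then act greedily."""
    feasible = affordance_probs(state) >= threshold
    masked = np.where(feasible, q_values, -np.inf)
    return subtasks[int(np.argmax(masked))]

print(select_subtask(state=None, q_values=np.array([0.3, 0.8, 0.9])))
# -> "mine_wood": higher-value subtasks are pruned as currently impossible
```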
arXiv Detail & Related papers (2022-03-23T19:17:22Z)
- Environment Generation for Zero-Shot Compositional Reinforcement Learning [105.35258025210862]
Compositional Design of Environments (CoDE) trains a Generator agent to automatically build a series of compositional tasks tailored to the agent's current skill level.
We learn to generate environments composed of multiple pages or rooms, and train RL agents capable of completing a wide range of complex tasks in those environments.
CoDE yields a 4x higher success rate than the strongest baseline, and demonstrates strong performance on real websites after learning on 3500 primitive tasks.
arXiv Detail & Related papers (2022-01-21T21:35:01Z)
- Hierarchical Skills for Efficient Exploration [70.62309286348057]
In reinforcement learning, pre-trained low-level skills have the potential to greatly facilitate exploration.
Prior knowledge of the downstream task is required to strike the right balance between generality (fine-grained control) and specificity (faster learning) in skill design.
We propose a hierarchical skill learning framework that acquires skills of varying complexity in an unsupervised manner.
arXiv Detail & Related papers (2021-10-20T22:29:32Z)
- Evolving Hierarchical Memory-Prediction Machines in Multi-Task Reinforcement Learning [4.030910640265943]
Behavioural agents must generalize across a variety of environments and objectives over time.
We use genetic programming to evolve highly generalized agents capable of operating in six unique environments from the control literature.
We show that emergent hierarchical structure in the evolving programs leads to multi-task agents that succeed by performing a temporal decomposition and encoding of the problem environments in memory.
arXiv Detail & Related papers (2021-06-23T21:34:32Z)
- Multi-Agent Reinforcement Learning with Temporal Logic Specifications [65.79056365594654]
We study the problem of learning to satisfy temporal logic specifications with a group of agents in an unknown environment.
We develop the first multi-agent reinforcement learning technique for temporal logic specifications.
We provide correctness and convergence guarantees for our main algorithm.
arXiv Detail & Related papers (2021-02-01T01:13:03Z)
- CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.