Learning Abstract and Transferable Representations for Planning
- URL: http://arxiv.org/abs/2205.02092v1
- Date: Wed, 4 May 2022 14:40:04 GMT
- Title: Learning Abstract and Transferable Representations for Planning
- Authors: Steven James, Benjamin Rosman, George Konidaris
- Abstract summary: We propose a framework for autonomously learning state abstractions of an agent's environment.
These abstractions are task-independent, and so can be reused to solve new tasks.
We show how to combine these portable representations with problem-specific ones to generate a sound description of a specific task.
- Score: 25.63560394067908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are concerned with the question of how an agent can acquire its own
representations from sensory data. We restrict our focus to learning
representations for long-term planning, a class of problems that
state-of-the-art learning methods are unable to solve. We propose a framework
for autonomously learning state abstractions of an agent's environment, given a
set of skills. Importantly, these abstractions are task-independent, and so can
be reused to solve new tasks. We demonstrate how an agent can use an existing
set of options to acquire representations from ego- and object-centric
observations. These abstractions can immediately be reused by the same agent in
new environments. We show how to combine these portable representations with
problem-specific ones to generate a sound description of a specific task that
can be used for abstract planning. Finally, we show how to autonomously
construct a multi-level hierarchy consisting of increasingly abstract
representations. Since these hierarchies are transferable, higher-order
concepts can be reused in new tasks, relieving the agent from relearning them
and improving sample efficiency. Our results demonstrate that our approach
allows an agent to transfer previous knowledge to new tasks, improving sample
efficiency as the number of tasks increases.
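The framework operates on options: roughly, each skill yields a learned precondition symbol (where it can be executed) and learned effect symbols (what it leaves true). Below is a minimal sketch of that recipe in the spirit of the skills-to-symbols line of work the paper builds on; all names and the scikit-learn-based fitting are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch (not the paper's implementation): learn a
# propositional abstraction from option-transition data by fitting a
# precondition classifier and clustering effects per option.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def learn_abstraction(transitions, n_effect_clusters=3):
    """transitions: {option_name: [(state, succeeded, next_state), ...]}
    with low-level state vectors. Returns a precondition classifier and
    a set of abstract effect prototypes per option."""
    symbols = {}
    for option, data in transitions.items():
        states = np.array([s for s, _, _ in data])
        ran_ok = np.array([ok for _, ok, _ in data])
        # Precondition symbol: from which states can this option run?
        pre = LogisticRegression().fit(states, ran_ok)
        # Effect symbols: cluster terminal states of successful runs.
        outcomes = np.array([s2 for _, ok, s2 in data if ok])
        eff = KMeans(n_clusters=n_effect_clusters, n_init=10).fit(outcomes)
        symbols[option] = {"precondition": pre, "effects": eff}
    return symbols
```

Because such classifiers are fit over ego- and object-centric features rather than task-specific coordinates, the resulting symbols are the kind of portable representation the abstract describes.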
Related papers
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
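As a rough illustration (not the paper's actual interface), a neuro-symbolic predicate can be pictured as a symbolic name and arity whose truth value is grounded by a small neural classifier over object features:

```python
# Hypothetical sketch of a neuro-symbolic predicate.
import torch
import torch.nn as nn

class NeuroSymbolicPredicate(nn.Module):
    """A symbolic atom such as On(a, b) whose truth value is grounded
    by a small neural classifier over the arguments' object features."""
    def __init__(self, name, arity, feat_dim, hidden=32):
        super().__init__()
        self.name, self.arity = name, arity
        self.net = nn.Sequential(nn.Linear(arity * feat_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, *object_feats):
        x = torch.cat(object_feats, dim=-1)
        return torch.sigmoid(self.net(x))  # soft truth value in (0, 1)

on = NeuroSymbolicPredicate("On", arity=2, feat_dim=16)
a, b = torch.randn(16), torch.randn(16)
print(float(on(a, b)))  # probability-like truth value of On(a, b)
```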
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- Gradient-based inference of abstract task representations for generalization in neural networks [5.794537047184604]
We show that gradients backpropagated through a neural network to a task representation layer are an efficient way to infer current task demands.
We demonstrate that gradient-based inference provides higher learning efficiency and generalization to novel tasks and limits.
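A minimal sketch of the idea, with all shapes and names hypothetical: freeze the network's weights and backpropagate the loss into a task-representation vector, so the gradient step itself performs task inference.

```python
# Hedged sketch of gradient-based task inference: only the task code
# adapts; the trained network weights stay fixed.
import torch
import torch.nn as nn

OBS, TASK, OUT = 8, 4, 2
net = nn.Sequential(nn.Linear(OBS + TASK, 32), nn.ReLU(), nn.Linear(32, OUT))
for p in net.parameters():
    p.requires_grad_(False)          # weights frozen after training

task_code = torch.zeros(TASK, requires_grad=True)
opt = torch.optim.SGD([task_code], lr=0.5)

def infer_task(x, y, steps=20):
    """Adapt task_code so the frozen net explains the current data."""
    for _ in range(steps):
        opt.zero_grad()
        inp = torch.cat([x, task_code.expand(len(x), -1)], dim=-1)
        loss = nn.functional.mse_loss(net(inp), y)
        loss.backward()              # gradient flows only into task_code
        opt.step()
    return task_code.detach().clone()

x, y = torch.randn(32, OBS), torch.randn(32, OUT)
print(infer_task(x, y))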
arXiv Detail & Related papers (2024-07-24T15:28:08Z)
- Emergence and Function of Abstract Representations in Self-Supervised Transformers [0.0]
We study the inner workings of small-scale transformers trained to reconstruct partially masked visual scenes.
We show that the network develops intermediate abstract representations, or abstractions, that encode all semantic features of the dataset.
Using precise manipulation experiments, we demonstrate that abstractions are central to the network's decision-making process.
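Such manipulation experiments are in the spirit of activation patching. A toy sketch, assuming a stand-in model rather than the paper's transformer: cache the intermediate representation computed for one scene and substitute it into the forward pass for another.

```python
# Illustrative patching probe (not the paper's code): if the output for
# scene A follows the substituted abstraction from scene B, the
# abstraction is causally involved in the decision.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 32))
layer = model[0]                      # "intermediate abstraction" layer
cache = {}

save = layer.register_forward_hook(lambda m, i, o: cache.update(h=o.detach()))
_ = model(torch.randn(1, 32))         # cache the abstraction for scene B
save.remove()

patch = layer.register_forward_hook(lambda m, i, o: cache["h"])
out = model(torch.randn(1, 32))       # scene A forward, scene B abstraction
patch.remove()
```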
arXiv Detail & Related papers (2023-12-08T20:47:15Z)
- State Representations as Incentives for Reinforcement Learning Agents: A Sim2Real Analysis on Robotic Grasping [3.4777703321218225]
This work examines the effect of various representations in incentivizing the agent to solve a specific robotic task.
A continuum of state representations is defined, starting from hand-crafted numerical states to encoded image-based representations.
The effects of each representation on the ability of the agent to solve the task in simulation and the transferability of the learned policy to the real robot are examined.
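The continuum can be pictured as interchangeable encoders feeding a shared policy head; a hypothetical sketch (all dimensions and modules are illustrative):

```python
# Two points on the continuum, mapped to a common feature size so the
# same policy head can be trained on either representation.
import torch
import torch.nn as nn

numeric_encoder = nn.Linear(7, 16)             # e.g. object + gripper pose
image_encoder = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 16))
policy = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

a_numeric = policy(numeric_encoder(torch.randn(1, 7)))
a_image = policy(image_encoder(torch.randn(1, 3, 64, 64)))
```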
arXiv Detail & Related papers (2023-09-21T11:41:22Z)
- Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning [20.272179949107514]
Deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes.
We propose a novel approach to representing the current task using reward machines (RMs).
Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions.
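A minimal sketch of the mechanism, assuming a hand-written reward machine for exposition: the agent is told the optimal outgoing transition from its current RM state and earns a shaping bonus for achieving it.

```python
# Toy reward machine with optimal-transition shaping (illustrative).
class RewardMachine:
    """Tiny two-step task: pick up a key, then open a door."""
    def __init__(self):
        # (rm_state, event) -> (next_rm_state, task_reward)
        self.delta = {("u0", "got_key"): ("u1", 0.0),
                      ("u1", "opened_door"): ("u2", 1.0)}
        self.state = "u0"

    def optimal_event(self):
        # Symbolic transition the agent should pursue next; fixed table
        # here, in general chosen by planning over the machine.
        return {"u0": "got_key", "u1": "opened_door"}.get(self.state)

    def step(self, event, bonus=0.1):
        target = self.optimal_event()
        if (self.state, event) in self.delta:
            self.state, reward = self.delta[(self.state, event)]
            return reward + (bonus if event == target else 0.0)
        return 0.0

rm = RewardMachine()
print(rm.step("got_key"), rm.step("opened_door"))  # 0.1 1.1
```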
arXiv Detail & Related papers (2023-07-11T12:28:05Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
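One way to picture the recipe (a sketch under assumed details, not the paper's exact method): a shared encoder is trained across several supervised tasks while an L1 penalty on each task's readout pushes every task to use a sparse subset of the shared features.

```python
# Illustrative multi-task training step with sparse task readouts.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
heads = nn.ModuleList([nn.Linear(10, 1) for _ in range(5)])   # 5 tasks
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()))

def multitask_step(batches, l1=1e-3):
    """batches: one (x, y) pair per task, aligned with `heads`."""
    loss = torch.zeros(())
    for head, (x, y) in zip(heads, batches):
        loss = loss + nn.functional.mse_loss(head(encoder(x)), y)
        loss = loss + l1 * head.weight.abs().sum()  # sparse feature usage
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

batches = [(torch.randn(16, 20), torch.randn(16, 1)) for _ in range(5)]
print(multitask_step(batches))
```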
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Discrete State-Action Abstraction via the Successor Representation [3.453310639983932]
Abstraction is one approach that provides the agent with an intrinsic reward for transitioning in a latent space.
Our approach is the first to automatically learn a discrete abstraction of the underlying environment.
Our proposed algorithm, Discrete State-Action Abstraction (DSAA), alternates between training options that move between abstract states and using them to efficiently explore more of the environment.
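A toy sketch of the abstraction step, assuming a known transition matrix for exposition (the paper works from sampled experience): compute the successor representation and cluster its rows into discrete abstract states.

```python
# Illustrative successor-representation clustering; option training
# is omitted for brevity.
import numpy as np
from sklearn.cluster import KMeans

def discrete_abstraction(P, n_abstract, gamma=0.95):
    """P: (n, n) transition matrix under the current policy. The SR
    (I - gamma P)^-1 holds expected discounted state visitations, so
    states with similar SR rows share a long-run role and a cluster."""
    sr = np.linalg.inv(np.eye(len(P)) - gamma * P)
    return KMeans(n_clusters=n_abstract, n_init=10).fit_predict(sr)

# Example: a 6-state ring under a random walk.
P = 0.5 * np.roll(np.eye(6), 1, axis=1) + 0.5 * np.roll(np.eye(6), -1, axis=1)
print(discrete_abstraction(P, n_abstract=3))
```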
arXiv Detail & Related papers (2022-06-07T17:37:30Z)
- An Empirical Investigation of Representation Learning for Imitation [76.48784376425911]
Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
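The modular framework can be read as a constructor over interchangeable parts; a hypothetical sketch in which augmentation, encoder, and objective can each be swapped independently (the invariance objective below is a placeholder for real choices such as contrastive or reconstruction losses):

```python
# Illustrative modular assembly of a representation learner.
import torch
import torch.nn as nn

def make_repr_algorithm(augment, encoder, objective):
    """Assemble a representation learner from interchangeable parts."""
    def training_step(batch, opt):
        z1, z2 = encoder(augment(batch)), encoder(augment(batch))
        loss = objective(z1, z2)
        opt.zero_grad(); loss.backward(); opt.step()
        return float(loss)
    return training_step

augment = lambda x: x + 0.1 * torch.randn_like(x)   # noise augmentation
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
invariance = lambda z1, z2: nn.functional.mse_loss(z1, z2)

step = make_repr_algorithm(augment, encoder, invariance)
opt = torch.optim.Adam(encoder.parameters())
print(step(torch.randn(64, 16), opt))
```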
arXiv Detail & Related papers (2022-05-16T11:23:42Z)
- Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning [120.38381203153159]
Reinforcement learning can train policies that effectively perform complex tasks.
For long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and composing lower-level skills.
We propose Value Function Spaces: a simple approach that produces such a representation by using the value functions corresponding to each lower-level skill.
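The construction is direct enough to sketch: the abstract state is simply the vector of value estimates of the lower-level skills (the value networks below are untrained stand-ins, not the paper's trained ones).

```python
# Illustrative value-function-space embedding.
import torch
import torch.nn as nn

STATE_DIM, N_SKILLS = 12, 4
skill_values = nn.ModuleList(
    [nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(), nn.Linear(32, 1))
     for _ in range(N_SKILLS)])

def value_function_space(state):
    # Abstract state: one coordinate per skill, measuring how promising
    # that skill currently is. Shape: (batch, N_SKILLS).
    return torch.cat([v(state) for v in skill_values], dim=-1)

print(value_function_space(torch.randn(1, STATE_DIM)))
```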
arXiv Detail & Related papers (2021-11-04T22:46:16Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
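A hedged sketch of the behavioral-prior idea (the paper uses a conditional normalizing flow; a plain decoder stands in here): pre-training maps a simple latent space, where the downstream RL then acts, to actions resembling successful prior behavior.

```python
# Illustrative behavioral prior; architecture is a stand-in.
import torch
import torch.nn as nn

class BehavioralPrior(nn.Module):
    """Maps (observation, latent z) to an action; pre-trained so that
    z ~ N(0, I) decodes to behaviors seen in successful trials."""
    def __init__(self, obs_dim=10, act_dim=4, z_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim + z_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim))

    def forward(self, obs, z):
        return self.net(torch.cat([obs, z], dim=-1))

prior = BehavioralPrior()
obs, z = torch.randn(1, 10), torch.randn(1, 4)
action = prior(obs, z)   # downstream RL chooses z, not raw actions
```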
arXiv Detail & Related papers (2020-11-19T18:47:40Z) - Sequential Transfer in Reinforcement Learning with a Generative Model [48.40219742217783]
We show how to reduce the sample complexity for learning new tasks by transferring knowledge from previously-solved ones.
We derive PAC bounds on its sample complexity which clearly demonstrate the benefits of using this kind of prior knowledge.
We empirically verify our theoretical findings in simple simulated domains.
arXiv Detail & Related papers (2020-07-01T19:53:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.