Reinforcement Learning with Prototypical Representations
- URL: http://arxiv.org/abs/2102.11271v1
- Date: Mon, 22 Feb 2021 18:56:34 GMT
- Title: Reinforcement Learning with Prototypical Representations
- Authors: Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto
- Abstract summary: Proto-RL is a self-supervised framework that ties representation learning with exploration through prototypical representations.
These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations.
This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
- Score: 114.35801511501639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning effective representations in image-based environments is crucial for
sample efficient Reinforcement Learning (RL). Unfortunately, in RL,
representation learning is confounded with the exploratory experience of the
agent -- learning a useful representation requires diverse data, while
effective exploration is only possible with coherent representations.
Furthermore, we would like to learn representations that not only generalize
across tasks but also accelerate downstream exploration for efficient
task-specific training. To address these challenges we propose Proto-RL, a
self-supervised framework that ties representation learning with exploration
through prototypical representations. These prototypes simultaneously serve as
a summarization of the exploratory experience of an agent as well as a basis
for representing observations. We pre-train these task-agnostic representations
and prototypes on environments without downstream task information. This
enables state-of-the-art downstream policy learning on a set of difficult
continuous control tasks.
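The abstract describes prototypes that act both as a summary of exploratory experience and as a basis for representing observations. A minimal sketch of what such a prototype basis can look like, assuming a SwAV-style soft assignment over cosine similarities; the function name, dimensions, and temperature are illustrative and not taken from the paper:

```python
import numpy as np

def prototype_representation(z, prototypes, temperature=0.1):
    """Soft-assign an encoded observation z to a set of prototype vectors.

    z:          (d,) latent embedding of an observation
    prototypes: (k, d) learned prototype vectors
    Returns a (k,) probability vector: the observation expressed in the
    prototype basis via a softmax over cosine similarities.
    """
    z = z / np.linalg.norm(z)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = protos @ z / temperature      # cosine similarity to each prototype
    logits -= logits.max()                 # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Example: 4 prototypes in a 3-dimensional latent space.
rng = np.random.default_rng(0)
protos = rng.normal(size=(4, 3))
z = rng.normal(size=3)
p = prototype_representation(z, protos)
print(p.round(3))  # probability mass over the 4 prototypes
```

In Proto-RL the prototypes themselves are learned jointly with the encoder during exploration; the sketch above only shows the representation step once both exist.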
Related papers
- Improving Reinforcement Learning Efficiency with Auxiliary Tasks in Non-Visual Environments: A Comparison [0.0]
This study compares common auxiliary tasks on top of what is, to the best of our knowledge, the only decoupled representation-learning method for low-dimensional non-visual observations.
Our findings show that representation learning with auxiliary tasks only provides performance gains in sufficiently complex environments.
arXiv Detail & Related papers (2023-10-06T13:22:26Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP)
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Accelerating exploration and representation learning with offline pre-training [52.6912479800592]
We show that exploration and representation learning can be improved by separately learning two different models from a single offline dataset.
We show that learning a state representation using noise-contrastive estimation and a model of auxiliary reward can significantly improve the sample efficiency on the challenging NetHack benchmark.
arXiv Detail & Related papers (2023-03-31T18:03:30Z)
- Provable Benefits of Representational Transfer in Reinforcement Learning [59.712501044999875]
We study the problem of representational transfer in RL, where an agent first pretrains in a number of source tasks to discover a shared representation.
We show that given generative access to source tasks, we can discover a representation, using which subsequent linear RL techniques quickly converge to a near-optimal policy.
arXiv Detail & Related papers (2022-05-29T04:31:29Z)
- Visuomotor Control in Multi-Object Scenes Using Object-Aware Representations [25.33452947179541]
We show the effectiveness of object-aware representation learning techniques for robotic tasks.
Our model learns control policies in a sample-efficient manner and outperforms state-of-the-art techniques.
arXiv Detail & Related papers (2022-05-12T19:48:11Z)
- Task-Induced Representation Learning [14.095897879222672]
We evaluate the effectiveness of representation learning approaches for decision making in visually complex environments.
We find that representation learning generally improves sample efficiency on unseen tasks even in visually complex scenes.
arXiv Detail & Related papers (2022-04-25T17:57:10Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
- Exploratory State Representation Learning [63.942632088208505]
We propose a new approach called XSRL (eXploratory State Representation Learning) to solve the problems of exploration and SRL in parallel.
On one hand, it jointly learns compact state representations and a state transition estimator which is used to remove unexploitable information from the representations.
On the other hand, it continuously trains an inverse model, and adds to the prediction error of this model a $k$-step learning progress bonus to form the objective of a discovery policy.
arXiv Detail & Related papers (2021-09-28T10:11:07Z)
- Odd-One-Out Representation Learning [1.6822770693792826]
We show that a weakly-supervised downstream task based on odd-one-out observations is suitable for model selection.
We also show that a bespoke metric-learning VAE model which performs well on this task also outperforms standard unsupervised models as well as a weakly-supervised disentanglement model.
arXiv Detail & Related papers (2020-12-14T22:01:15Z)
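Several entries above rely on contrastive objectives for state representation learning, most explicitly the offline pre-training paper's use of noise-contrastive estimation. A minimal NumPy sketch of an InfoNCE-style loss over a batch of state embeddings; the shapes, names, and temperature are illustrative assumptions, not taken from any of the papers:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.5):
    """Noise-contrastive (InfoNCE-style) loss over a batch of state embeddings.

    anchors:   (n, d) embeddings of states s_t
    positives: (n, d) embeddings of matching successor states
    Each anchor's positive is its row-aligned successor; the other n-1 rows in
    the batch serve as negatives. Returns the mean cross-entropy over the batch.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                  # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # diagonal = correct pairs

# Matched pairs should be easy to identify; random pairs should not.
rng = np.random.default_rng(1)
x = rng.normal(size=(8, 16))
loss_matched = info_nce_loss(x, x)
loss_random = info_nce_loss(x, rng.normal(size=(8, 16)))
print(f"matched: {loss_matched:.3f}, random: {loss_random:.3f}")
```

A lower loss on matched pairs than on random pairs is exactly the signal such objectives exploit: embeddings that make true state/successor pairs distinguishable from batch negatives.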
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.