Data-Efficient Reinforcement Learning with Self-Predictive
Representations
- URL: http://arxiv.org/abs/2007.05929v4
- Date: Thu, 20 May 2021 09:15:57 GMT
- Title: Data-Efficient Reinforcement Learning with Self-Predictive
Representations
- Authors: Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron
Courville, Philip Bachman
- Abstract summary: We train an agent to predict its own latent state representations multiple steps into the future.
On its own, this future prediction objective outperforms prior methods for sample-efficient deep RL from pixels.
Our full self-supervised objective, which combines future prediction and data augmentation, achieves a median human-normalized score of 0.415 on Atari.
- Score: 21.223069189953037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep reinforcement learning excels at solving tasks where large amounts
of data can be collected through virtually unlimited interaction with the
environment, learning from limited interaction remains a key challenge. We
posit that an agent can learn more efficiently if we augment reward
maximization with self-supervised objectives based on structure in its visual
input and sequential interaction with the environment. Our method,
Self-Predictive Representations (SPR), trains an agent to predict its own latent
state representations multiple steps into the future. We compute target
representations for future states using an encoder that is an exponential
moving average of the agent's parameters, and we make predictions using a
learned transition model. On its own, this future prediction objective
outperforms prior methods for sample-efficient deep RL from pixels. We further
improve performance by adding data augmentation to the future prediction loss,
which forces the agent's representations to be consistent across multiple views
of an observation. Our full self-supervised objective, which combines future
prediction and data augmentation, achieves a median human-normalized score of
0.415 on Atari in a setting limited to 100k steps of environment interaction,
which represents a 55% relative improvement over the previous state-of-the-art.
Notably, even in this limited data regime, SPR exceeds expert human scores on 7
out of 26 games. The code associated with this work is available at
https://github.com/mila-iqia/spr.
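
As a concrete illustration of the objective described above, the following is a minimal PyTorch sketch of the SPR future-prediction loss: an online encoder and transition model roll latents forward, while an EMA copy of the encoder produces prediction targets. The module names (`encoder`, `target_encoder`, `transition`, `projector`, `target_projector`, `predictor`) and the EMA rate `tau` are illustrative assumptions, not the paper's exact API; see the repository linked above for the reference implementation.

```python
# Sketch of the SPR future-prediction loss (illustrative, not the authors' code).
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(target: torch.nn.Module, online: torch.nn.Module, tau: float = 0.99):
    """Exponential-moving-average update of the target encoder's parameters."""
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.mul_(tau).add_((1.0 - tau) * p_o)


def spr_loss(obs, actions, future_obs, encoder, target_encoder,
             transition, projector, target_projector, predictor):
    """Negative cosine similarity between predicted and target latents over K steps.

    obs:        (B, C, H, W)     augmented observation at time t
    actions:    (K, B, ...)      actions a_t .. a_{t+K-1}
    future_obs: (K, B, C, H, W)  augmented observations at t+1 .. t+K
    """
    z = encoder(obs)                       # online latent z_t
    loss = 0.0
    num_steps = actions.shape[0]
    for k in range(num_steps):
        z = transition(z, actions[k])      # roll the latent forward one step
        pred = predictor(projector(z))     # online projection + prediction head
        with torch.no_grad():              # targets receive no gradient
            target = target_projector(target_encoder(future_obs[k]))
        loss = loss - F.cosine_similarity(pred, target, dim=-1).mean()
    return loss / num_steps
```

In a full agent, the target networks would be initialized as deep copies of the online ones (e.g. with `copy.deepcopy`), `ema_update` would run after each optimizer step, and the training objective would be the usual RL loss plus a weighted `spr_loss`, with data augmentation applied to `obs` and `future_obs` before encoding.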