Data-Efficient Reinforcement Learning with Self-Predictive
Representations
- URL: http://arxiv.org/abs/2007.05929v4
- Date: Thu, 20 May 2021 09:15:57 GMT
- Title: Data-Efficient Reinforcement Learning with Self-Predictive
Representations
- Authors: Max Schwarzer, Ankesh Anand, Rishab Goel, R Devon Hjelm, Aaron
Courville, Philip Bachman
- Abstract summary: We train an agent to predict its own latent state representations multiple steps into the future.
On its own, this future prediction objective outperforms prior methods for sample-efficient deep RL from pixels.
Our full self-supervised objective, which combines future prediction and data augmentation, achieves a median human-normalized score of 0.415 on Atari.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep reinforcement learning excels at solving tasks where large amounts
of data can be collected through virtually unlimited interaction with the
environment, learning from limited interaction remains a key challenge. We
posit that an agent can learn more efficiently if we augment reward
maximization with self-supervised objectives based on structure in its visual
input and sequential interaction with the environment. Our method,
Self-Predictive Representations (SPR), trains an agent to predict its own latent
state representations multiple steps into the future. We compute target
representations for future states using an encoder which is an exponential
moving average of the agent's parameters and we make predictions using a
learned transition model. On its own, this future prediction objective
outperforms prior methods for sample-efficient deep RL from pixels. We further
improve performance by adding data augmentation to the future prediction loss,
which forces the agent's representations to be consistent across multiple views
of an observation. Our full self-supervised objective, which combines future
prediction and data augmentation, achieves a median human-normalized score of
0.415 on Atari in a setting limited to 100k steps of environment interaction,
which represents a 55% relative improvement over the previous state-of-the-art.
Notably, even in this limited data regime, SPR exceeds expert human scores on 7
out of 26 games. The code associated with this work is available at
https://github.com/mila-iqia/spr
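The abstract's core mechanism can be sketched in a few lines: an online encoder produces latents, a learned transition model rolls them forward, and targets come from an exponential-moving-average copy of the encoder. The sketch below is a toy numpy illustration under strong simplifying assumptions: linear maps stand in for SPR's convolutional encoder and transition network, the dimensions and `TAU` value are illustrative, and the projection heads, data augmentation, and Q-learning loss of the actual method are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical); the real SPR encodes stacked Atari frames.
OBS_DIM, LATENT_DIM, K_STEPS, TAU = 8, 4, 3, 0.99

# Linear stand-ins for the deep online encoder, EMA target encoder,
# and the learned (latent, action) -> latent transition model.
online_enc = rng.normal(size=(OBS_DIM, LATENT_DIM))
target_enc = online_enc.copy()
transition = rng.normal(size=(LATENT_DIM + 1, LATENT_DIM)) * 0.1

def ema_update(target, online, tau=TAU):
    """Target parameters track an exponential moving average of the online ones."""
    return tau * target + (1.0 - tau) * online

def cosine_loss(pred, target):
    """Negative cosine similarity between predicted and target latents."""
    p = pred / (np.linalg.norm(pred) + 1e-8)
    t = target / (np.linalg.norm(target) + 1e-8)
    return -float(p @ t)

def spr_loss(obs_seq, act_seq):
    """Unroll the transition model K steps and match EMA-encoded future states."""
    z = obs_seq[0] @ online_enc  # online latent of the current observation
    total = 0.0
    for k in range(1, K_STEPS + 1):
        z = np.concatenate([z, [act_seq[k - 1]]]) @ transition  # latent rollout
        z_target = obs_seq[k] @ target_enc  # treated as a fixed (no-grad) target
        total += cosine_loss(z, z_target)
    return total / K_STEPS

obs = rng.normal(size=(K_STEPS + 1, OBS_DIM))
acts = rng.normal(size=K_STEPS)
loss = spr_loss(obs, acts)
target_enc = ema_update(target_enc, online_enc)
```

In training, the loss would be minimized with respect to the online encoder and transition parameters only, while `ema_update` runs once per step; since each per-step term is a negative cosine similarity, the loss is bounded in [-1, 1].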
Related papers
- VEDIT: Latent Prediction Architecture For Procedural Video Representation Learning [59.68917139718813]
We show that a strong off-the-shelf frozen pretrained visual encoder can achieve state-of-the-art (SoTA) performance in forecasting and procedural planning.
By conditioning on frozen clip-level embeddings from observed steps to predict the actions of unseen steps, our prediction model is able to learn robust representations for forecasting.
arXiv Detail & Related papers (2024-10-04T14:52:09Z)
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction.
The experimental results demonstrate that MPI exhibits remarkable improvement by 10% to 64% compared with previous state-of-the-art in real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Interpretable Long Term Waypoint-Based Trajectory Prediction Model [1.4778851751964937]
We study the impact of adding a long-term goal on the performance of a trajectory prediction framework.
We present an interpretable long term waypoint-driven prediction framework (WayDCM)
arXiv Detail & Related papers (2023-12-11T09:10:22Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- An Unbiased Look at Datasets for Visuo-Motor Pre-Training [20.094244564603184]
We show that dataset choice is just as important to this paradigm's success.
We observe that traditional vision datasets are surprisingly competitive options for visuo-motor representation learning.
We show that common simulation benchmarks are not a reliable proxy for real world performance.
arXiv Detail & Related papers (2023-10-13T17:59:02Z)
- Human trajectory prediction using LSTM with Attention mechanism [0.0]
We use attention scores to determine which parts of the input data the model should focus on when making predictions.
We show that our modified algorithm performs better than the Social LSTM in predicting the future trajectory of pedestrians in crowded spaces.
arXiv Detail & Related papers (2023-09-01T08:35:24Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Conditioned Human Trajectory Prediction using Iterative Attention Blocks [70.36888514074022]
We present a simple yet effective pedestrian trajectory prediction model aimed at predicting pedestrian positions in urban-like environments.
Our model is a neural-based architecture that can run several layers of attention blocks and transformers in an iterative sequential fashion.
We show that without explicit introduction of social masks, dynamical models, social pooling layers, or complicated graph-like structures, it is possible to produce results on par with SoTA models.
arXiv Detail & Related papers (2022-06-29T07:49:48Z)
- Deep Reinforcement and InfoMax Learning [32.426674181365456]
We introduce an objective based on Deep InfoMax which trains the agent to predict the future by maximizing the mutual information between its internal representation of successive timesteps.
We test our approach in several synthetic settings, where it successfully learns representations that are predictive of the future.
arXiv Detail & Related papers (2020-06-12T14:19:46Z)
- Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.