Pretrained Encoders are All You Need
- URL: http://arxiv.org/abs/2106.05139v1
- Date: Wed, 9 Jun 2021 15:27:25 GMT
- Title: Pretrained Encoders are All You Need
- Authors: Mina Khan, P Srivatsa, Advait Rane, Shriram Chenniappa, Rishabh Anand,
Sherjil Ozair, and Pattie Maes
- Abstract summary: Self-supervised models have shown successful transfer to diverse settings.
We also explore fine-tuning pretrained representations with self-supervised techniques.
Our results show that pretrained representations are on par with state-of-the-art self-supervised methods trained on domain-specific data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-efficiency and generalization are key challenges in deep learning and
deep reinforcement learning as many models are trained on large-scale,
domain-specific, and expensive-to-label datasets. Self-supervised models
trained on large-scale uncurated datasets have shown successful transfer to
diverse settings. We investigate using pretrained image representations and
spatio-temporal attention for state representation learning in Atari. We also
explore fine-tuning pretrained representations with self-supervised techniques,
i.e., contrastive predictive coding, spatio-temporal contrastive learning, and
augmentations. Our results show that pretrained representations are on par with
state-of-the-art self-supervised methods trained on domain-specific data.
Pretrained representations thus yield data- and compute-efficient state
representations. https://github.com/PAL-ML/PEARL_v1
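
As a rough illustration of the setup described in the abstract, the sketch below embeds a short stack of Atari frames with a frozen, ImageNet-pretrained ResNet-50 as a stand-in for the pretrained encoder, then pools the per-frame features with a temporal self-attention layer to produce a state representation. The encoder choice, attention configuration, and frame preprocessing are illustrative assumptions, not the PEARL implementation from the linked repository.

```python
# Minimal sketch (assumptions, not the authors' PEARL code): frozen pretrained
# image encoder + temporal self-attention over a window of Atari frames.
import torch
import torch.nn as nn
from torchvision import models


class PretrainedStateEncoder(nn.Module):
    def __init__(self, embed_dim=2048, num_heads=8):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()           # keep the 2048-d pooled features
        for p in backbone.parameters():       # frozen encoder: no fine-tuning here
            p.requires_grad = False
        self.backbone = backbone
        self.temporal_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, frames):
        # frames: (batch, time, 3, 224, 224), already resized and normalized
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)   # (b, t, 2048)
        attended, _ = self.temporal_attn(feats, feats, feats)        # attend over time
        return attended.mean(dim=1)                                  # (b, 2048) state vector


# Usage: two dummy states, each a stack of 4 frames (hypothetical shapes).
encoder = PretrainedStateEncoder().eval()
dummy = torch.rand(2, 4, 3, 224, 224)
with torch.no_grad():
    state = encoder(dummy)
print(state.shape)  # torch.Size([2, 2048])
```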