An Empirical Investigation of Representation Learning for Imitation
- URL: http://arxiv.org/abs/2205.07886v1
- Date: Mon, 16 May 2022 11:23:42 GMT
- Title: An Empirical Investigation of Representation Learning for Imitation
- Authors: Xin Chen, Sam Toyer, Cody Wild, Scott Emmons, Ian Fischer, Kuang-Huei
Lee, Neel Alex, Steven H Wang, Ping Luo, Stuart Russell, Pieter Abbeel, Rohin
Shah
- Abstract summary: Recent work in vision, reinforcement learning, and NLP has shown that auxiliary representation learning objectives can reduce the need for large amounts of expensive, task-specific data.
We propose a modular framework for constructing representation learning algorithms, then use our framework to evaluate the utility of representation learning for imitation.
- Score: 76.48784376425911
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Imitation learning often needs a large demonstration set in order to handle
the full range of situations that an agent might find itself in during
deployment. However, collecting expert demonstrations can be expensive. Recent
work in vision, reinforcement learning, and NLP has shown that auxiliary
representation learning objectives can reduce the need for large amounts of
expensive, task-specific data. Our Empirical Investigation of Representation
Learning for Imitation (EIRLI) investigates whether similar benefits apply to
imitation learning. We propose a modular framework for constructing
representation learning algorithms, then use our framework to evaluate the
utility of representation learning for imitation across several environment
suites. In the settings we evaluate, we find that existing algorithms for
image-based representation learning provide limited value relative to a
well-tuned baseline with image augmentations. To explain this result, we
investigate differences between imitation learning and other settings where
representation learning has provided significant benefit, such as image
classification. Finally, we release a well-documented codebase which both
replicates our findings and provides a modular framework for creating new
representation learning algorithms out of reusable components.
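As a rough illustration of the modular design described in the abstract, the sketch below composes an image augmenter, an encoder, and an auxiliary representation objective into a behavioral cloning update. This is a minimal sketch in PyTorch under assumed component interfaces; the class and function names are hypothetical and do not come from the EIRLI codebase.

```python
# Hypothetical sketch of a modular representation-learning setup for
# imitation. All names are illustrative; this is not the EIRLI API.
import torch.nn as nn
import torch.nn.functional as F

class RepLearner(nn.Module):
    """Composes an augmenter, an encoder, and an auxiliary loss head."""
    def __init__(self, augment, encoder, aux_head, aux_loss):
        super().__init__()
        self.augment = augment    # e.g. random crop / color jitter
        self.encoder = encoder    # image observation -> representation z
        self.aux_head = aux_head  # z -> input for the auxiliary loss
        self.aux_loss = aux_loss  # e.g. contrastive or reconstruction term

    def forward(self, obs):
        z = self.encoder(self.augment(obs))
        return z, self.aux_loss(self.aux_head(z), obs)

def bc_step(rep, policy, optimizer, obs, expert_actions, aux_weight=1.0):
    """One behavioral cloning step with an auxiliary representation loss."""
    z, aux = rep(obs)
    loss = F.cross_entropy(policy(z), expert_actions) + aux_weight * aux
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Setting aux_weight to zero recovers the augmentation-only baseline that, per the abstract, the evaluated representation learning objectives provide limited value over.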
Related papers
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning (a generic sketch of a contrastive objective appears after this list).
arXiv Detail & Related papers (2022-05-30T17:50:59Z)
- Empirical Evaluation and Theoretical Analysis for Representation Learning: A Survey [25.5633960013493]
Representation learning enables us to automatically extract generic feature representations from a dataset to solve other machine learning tasks.
Recently, feature representations extracted by a representation learning algorithm, combined with a simple predictor, have achieved state-of-the-art performance on several machine learning tasks.
arXiv Detail & Related papers (2022-04-18T09:18:47Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure, in contrast to methods that keep the learned representation frozen.
This separation underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification aims to train models for new classes using only a limited number of labeled examples.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- Complementing Representation Deficiency in Few-shot Image Classification: A Meta-Learning Approach [27.350615059290348]
We propose a meta-learning approach with complemented representations network (MCRNet) for few-shot image classification.
In particular, we embed a latent space in which latent codes are reconstructed with extra representation information to compensate for the representation deficiency.
Our end-to-end framework achieves the state-of-the-art performance in image classification on three standard few-shot learning datasets.
arXiv Detail & Related papers (2020-07-21T13:25:54Z)
- Provable Representation Learning for Imitation Learning via Bi-level Optimization [60.059520774789654]
A common strategy in modern learning systems is to learn a representation that is useful for many tasks.
We study this strategy in the imitation learning setting for Markov decision processes (MDPs) where multiple experts' trajectories are available.
We instantiate this framework for the imitation learning settings of behavior cloning and observation-alone imitation.
arXiv Detail & Related papers (2020-02-24T21:03:52Z)
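The bi-level strategy in the entry above can be written schematically as follows; this is a plausible formalization of the abstract's description rather than the paper's exact notation. A shared representation \phi is optimized in the outer problem, while each expert k fits its own policy head w_k in the inner problem over its demonstration data D_k:

```latex
\min_{\phi} \; \sum_{k=1}^{K} \; \min_{w_k} \;
  \mathbb{E}_{(s,a) \sim D_k}\!\left[ \ell\!\left( \pi_{w_k}(\phi(s)), a \right) \right]
```

where, for behavior cloning, \ell is a negative log-likelihood of the expert action a given state s.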
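For the SlotCon entry earlier in the list, the following is a generic InfoNCE-style contrastive loss, shown only to illustrate contrastive representation learning in general; it is not SlotCon's slot-level grouping objective, and the function name is invented for this sketch.

```python
# Generic InfoNCE contrastive loss over two augmented views of a batch.
# Illustrative only; not SlotCon's slot-level objective.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature  # (batch, batch) similarity matrix
    # The matching view (the diagonal) is each row's positive; the rest
    # of the batch serves as negatives.
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```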