Representation Learning for Out-Of-Distribution Generalization in
Reinforcement Learning
- URL: http://arxiv.org/abs/2107.05686v1
- Date: Mon, 12 Jul 2021 18:49:48 GMT
- Title: Representation Learning for Out-Of-Distribution Generalization in
Reinforcement Learning
- Authors: Andrea Dittadi, Frederik Träuble, Manuel Wüthrich, Felix Widmaier,
Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard
Schölkopf, Stefan Bauer
- Abstract summary: This paper aims to establish the first systematic characterization of the usefulness of learned representations for real-world downstream tasks.
By training over 10,000 reinforcement learning policies, we extensively evaluate to what extent different representation properties affect out-of-distribution generalization.
We demonstrate zero-shot transfer of these policies from simulation to the real world, without any domain randomization or fine-tuning.
- Score: 39.21650402977466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning data representations that are useful for various downstream tasks is
a cornerstone of artificial intelligence. While existing methods are typically
evaluated on downstream tasks such as classification or generative image
quality, we propose to assess representations through their usefulness in
downstream control tasks, such as reaching or pushing objects. By training over
10,000 reinforcement learning policies, we extensively evaluate to what extent
different representation properties affect out-of-distribution (OOD)
generalization. Finally, we demonstrate zero-shot transfer of these policies
from simulation to the real world, without any domain randomization or
fine-tuning. This paper aims to establish the first systematic characterization
of the usefulness of learned representations for real-world OOD downstream
tasks.
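The abstract describes an evaluation protocol rather than a new algorithm: a representation is pretrained and frozen, a policy is trained on top of it, and that policy is then scored on out-of-distribution variants of the task. The minimal Python sketch below illustrates that setup only; the encoder, latent size, and action dimension are hypothetical stand-ins, not the authors' code.

```python
# Minimal sketch of the evaluation protocol described above: a pretrained
# representation is frozen, a small policy head is trained on top of it, and
# the resulting policy is evaluated on environments whose properties lie
# outside the training distribution. All names and sizes are illustrative.
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Stand-in for a pretrained representation model (e.g. a VAE encoder)."""
    def __init__(self, obs_dim=64 * 64 * 3, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(obs_dim, latent_dim))
        for p in self.parameters():          # the representation is not fine-tuned
            p.requires_grad_(False)

    def forward(self, obs):
        return self.net(obs)

class PolicyHead(nn.Module):
    """Small trainable policy operating on the frozen latent code."""
    def __init__(self, latent_dim=32, action_dim=9):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                 nn.Linear(64, action_dim))

    def forward(self, z):
        return torch.tanh(self.net(z))

encoder, policy = FrozenEncoder(), PolicyHead()

def act(obs):
    """Map a raw observation to an action through the frozen representation."""
    with torch.no_grad():
        z = encoder(obs)
    return policy(z)

# In-distribution training and OOD evaluation would then differ only in the
# environment parameters (e.g. unseen object colours or shapes); the policy
# weights are reused unchanged, which is what "zero-shot" refers to here.
obs = torch.rand(1, 3, 64, 64)               # dummy image observation
print(act(obs).shape)                        # torch.Size([1, 9])
```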
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Generalizable Imitation Learning Through Pre-Trained Representations [19.98418419179064]
We introduce BC-ViT, an imitation learning algorithm that leverages rich DINO pre-trained Visual Transformer (ViT) patch-level embeddings to obtain better generalization when learning through demonstrations.
Our learner sees the world by clustering appearance features into semantic concepts, forming stable keypoints that generalize across a wide range of appearance variations and object types.
arXiv Detail & Related papers (2023-11-15T20:15:51Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- Invariance is Key to Generalization: Examining the Role of Representation in Sim-to-Real Transfer for Visual Navigation [35.01394611106655]
Key to generalization are representations rich enough to capture all task-relevant information.
We experimentally study such a representation for visual navigation.
We show that our representation reduces the A-distance between the training and test domains (a sketch of the standard proxy for this quantity appears after this list).
arXiv Detail & Related papers (2023-10-23T15:15:19Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning by jointly optimizing a reinforcement learning policy and an inverse dynamics prediction objective (a generic sketch of such an objective appears after this list).
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Reinforcement Learning with Prototypical Representations [114.35801511501639]
Proto-RL is a self-supervised framework that ties representation learning with exploration through prototypical representations.
These prototypes serve both as a summary of the agent's exploratory experience and as a basis for representing observations.
This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
arXiv Detail & Related papers (2021-02-22T18:56:34Z)
- Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information by entropy maximization.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
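As referenced in the sim-to-real visual navigation entry above, the A-distance between two domains is usually reported via the standard proxy (Ben-David et al.): train a classifier to separate features from the training domain and the test domain, then plug its held-out error into the formula below. This is a generic sketch of that proxy, not code from the cited paper.

```python
def proxy_a_distance(domain_classifier_error: float) -> float:
    """Standard proxy A-distance used in domain adaptation.

    `domain_classifier_error` is the held-out error of a classifier trained to
    tell training-domain features apart from test-domain features; a lower
    A-distance means the two feature distributions are harder to separate.
    """
    return 2.0 * (1.0 - 2.0 * domain_classifier_error)

# Example: a domain classifier barely better than chance (45% error)
# yields a small A-distance, indicating well-aligned representations.
print(proxy_a_distance(0.45))  # 0.2
```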
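The ALP entry above combines a reinforcement learning objective with an inverse dynamics prediction objective. Below is a generic sketch of the inverse dynamics part, assuming image observations and continuous actions; the network sizes and names are illustrative and not taken from the ALP implementation.

```python
# Generic inverse-dynamics representation objective: train an encoder so that
# the action taken between two consecutive observations can be predicted from
# their embeddings. Shapes and names below are illustrative stand-ins.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 128), nn.ReLU(),
                        nn.Linear(128, 32))
inverse_model = nn.Sequential(nn.Linear(2 * 32, 64), nn.ReLU(),
                              nn.Linear(64, 4))          # 4-dim continuous action
optimizer = torch.optim.Adam(list(encoder.parameters()) +
                             list(inverse_model.parameters()), lr=3e-4)

def inverse_dynamics_loss(obs_t, obs_t1, action_t):
    """Predict a_t from (phi(o_t), phi(o_{t+1})) and regress against the true action."""
    z_t, z_t1 = encoder(obs_t), encoder(obs_t1)
    pred_action = inverse_model(torch.cat([z_t, z_t1], dim=-1))
    return nn.functional.mse_loss(pred_action, action_t)

# One gradient step on a dummy batch; in ALP this term is combined with the
# RL policy objective so the representation reflects action-relevant features.
obs_t, obs_t1 = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
action_t = torch.rand(8, 4)
loss = inverse_dynamics_loss(obs_t, obs_t1, action_t)
loss.backward()
optimizer.step()
print(float(loss))
```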