How Transferable are the Representations Learned by Deep Q Agents?
- URL: http://arxiv.org/abs/2002.10021v1
- Date: Mon, 24 Feb 2020 00:23:47 GMT
- Title: How Transferable are the Representations Learned by Deep Q Agents?
- Authors: Jacob Tyo and Zachary Lipton
- Abstract summary: We consider the source of Deep Reinforcement Learning's sample complexity.
We compare the benefits of transfer learning to learning a policy from scratch.
We find that benefits due to transfer are highly variable in general and non-symmetric across pairs of tasks.
- Score: 13.740174266824532
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we consider the source of Deep Reinforcement Learning (DRL)'s
sample complexity, asking how much derives from the requirement of learning
useful representations of environment states and how much is due to the sample
complexity of learning a policy. While for DRL agents, the distinction between
representation and policy may not be clear, we seek new insight through a set
of transfer learning experiments. In each experiment, we retain some fraction
of layers trained on either the same game or a related game, comparing the
benefits of transfer learning to learning a policy from scratch. Interestingly,
we find that benefits due to transfer are highly variable in general and
non-symmetric across pairs of tasks. Our experiments suggest that perhaps
transfer from simpler environments can boost performance on more complex
downstream tasks and that the requirements of learning a useful representation
can range from negligible to the majority of the sample complexity, based on
the environment. Furthermore, we find that fine-tuning generally outperforms
training with the transferred layers frozen, confirming an insight first noted
in the classification setting.
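The layer-retention experiment described in the abstract can be pictured with a short PyTorch sketch: copy some prefix of layers from an agent trained on a source game into a fresh network for the target game, then either freeze them or fine-tune. This is a minimal illustration assuming a Nature-style DQN; the architecture, the split point (`num_modules`), and the freeze flag are illustrative choices, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

def build_dqn(num_actions: int) -> nn.Sequential:
    # Nature-DQN style network: 3 conv layers followed by a 2-layer MLP head.
    return nn.Sequential(
        nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
        nn.Linear(512, num_actions),
    )

def transfer_layers(source: nn.Sequential, target: nn.Sequential,
                    num_modules: int, freeze: bool) -> None:
    # Copy the first `num_modules` modules from the source agent into the
    # target, then either freeze them or leave them trainable (fine-tuning).
    for i in range(num_modules):
        target[i].load_state_dict(source[i].state_dict())
        if freeze:
            for p in target[i].parameters():
                p.requires_grad = False

source_net = build_dqn(num_actions=6)  # stand-in for an agent trained on the source game
target_net = build_dqn(num_actions=4)  # fresh network for the target game
transfer_layers(source_net, target_net, num_modules=6, freeze=False)  # fine-tune setting
```

With `freeze=True` the transferred layers are held fixed (the "frozen" condition in the abstract); with `freeze=False` they are fine-tuned along with the rest of the network.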
Related papers
- Provable Benefit of Multitask Representation Learning in Reinforcement Learning [46.11628795660159]
This paper theoretically characterizes the benefit of representation learning under the low-rank Markov decision process (MDP) model.
To the best of our knowledge, this is the first theoretical study that characterizes the benefit of representation learning in exploration-based reward-free multitask reinforcement learning.
arXiv Detail & Related papers (2022-06-13T04:29:02Z)
- Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap [64.60460828425502]
We propose a new guarantee on the downstream performance of contrastive learning.
Our new theory hinges on the insight that the support of different intra-class samples will become more overlapped under aggressive data augmentations.
We propose an unsupervised model selection metric ARC that aligns well with downstream accuracy.
arXiv Detail & Related papers (2022-03-25T05:36:26Z)
- Exploratory State Representation Learning [63.942632088208505]
We propose a new approach called XSRL (eXploratory State Representation Learning) to solve the problems of exploration and SRL in parallel.
On one hand, it jointly learns compact state representations and a state transition estimator which is used to remove unexploitable information from the representations.
On the other hand, it continuously trains an inverse model, and adds to the prediction error of this model a $k$-step learning progress bonus to form the objective of a discovery policy.
arXiv Detail & Related papers (2021-09-28T10:11:07Z)
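A minimal sketch of how the XSRL discovery objective summarized above could be assembled: the per-step reward combines the inverse model's current prediction error with a k-step learning-progress bonus (how much that error has dropped over the last k updates). The function name, the error bookkeeping, and the exact form of the bonus are assumptions for illustration, not the paper's implementation.

```python
from collections import deque

def discovery_reward(current_error: float, error_history: deque, k: int) -> float:
    # k-step learning progress: reduction of the inverse model's prediction
    # error over the last k recorded updates (0 until enough history exists).
    progress = error_history[-k] - current_error if len(error_history) >= k else 0.0
    error_history.append(current_error)
    # The discovery policy's objective rewards both hard-to-predict transitions
    # and transitions on which the inverse model is still improving.
    return current_error + progress

history = deque(maxlen=1000)
reward = discovery_reward(current_error=0.8, error_history=history, k=10)
```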
- Fractional Transfer Learning for Deep Model-Based Reinforcement Learning [0.966840768820136]
Reinforcement learning (RL) is well known for requiring large amounts of data before agents learn to perform complex tasks.
Recent progress in model-based RL allows agents to be much more data-efficient.
We present a simple alternative approach: fractional transfer learning.
arXiv Detail & Related papers (2021-08-14T12:44:42Z)
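One way to read "fractional transfer" is sketched below: rather than copying source weights wholesale (or not at all), a fraction `omega` of them is blended into a freshly initialized target network. This is a hedged illustration of the idea only; the paper's exact transfer rule and the model-based setting it is applied to are not reproduced here.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fractional_transfer(source: nn.Module, target: nn.Module, omega: float) -> None:
    # The target keeps its fresh random initialization and additionally
    # receives a fraction `omega` of each corresponding source parameter.
    for p_src, p_tgt in zip(source.parameters(), target.parameters()):
        p_tgt.add_(omega * p_src)

source_model = nn.Linear(16, 4)  # stand-in for a network trained on the source task
target_model = nn.Linear(16, 4)  # freshly initialized network for the target task
fractional_transfer(source_model, target_model, omega=0.2)
```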
- Reinforcement Learning with Prototypical Representations [114.35801511501639]
Proto-RL is a self-supervised framework that ties representation learning with exploration through prototypical representations.
These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations.
This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
arXiv Detail & Related papers (2021-02-22T18:56:34Z)
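A hedged sketch of the "prototypes as a basis for representing observations" idea: an encoded observation is described by its soft assignment over a set of learned prototype vectors. The encoder is omitted, and the prototype count, dimensionality, and temperature are placeholder values rather than the Proto-RL implementation.

```python
import torch
import torch.nn.functional as F

def prototype_representation(z: torch.Tensor, prototypes: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    # z: (batch, dim) encoded observations; prototypes: (num_protos, dim).
    # Each observation is represented by its soft assignment over prototypes.
    z = F.normalize(z, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    return F.softmax(z @ protos.t() / temperature, dim=-1)

prototypes = torch.randn(16, 64)                               # learned prototype vectors
representation = prototype_representation(torch.randn(4, 64), prototypes)
```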
- When Is Generalizable Reinforcement Learning Tractable? [74.87383727210705]
We study the query complexity required to train RL agents that can generalize to multiple environments.
We introduce Strong Proximity, a structural condition which precisely characterizes the relative closeness of different environments.
We show that under a natural weakening of this condition, RL can require query complexity that is exponential in the horizon to generalize.
arXiv Detail & Related papers (2021-01-01T19:08:24Z)
- Learning to Sample with Local and Global Contexts in Experience Replay Buffer [135.94190624087355]
We propose a new learning-based sampling method that can compute the relative importance of each transition.
We show that our framework can significantly improve the performance of various off-policy reinforcement learning methods.
arXiv Detail & Related papers (2020-07-14T21:12:56Z)
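The learning-based sampling idea above can be pictured with a small sketch: a scorer network assigns an importance score to each stored transition, and minibatches are drawn in proportion to those scores. The scorer architecture, the transition features, and the omitted training signal for the scorer are all placeholders, not the paper's design.

```python
import torch
import torch.nn as nn

class TransitionScorer(nn.Module):
    def __init__(self, feature_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: one row per stored transition (e.g. concatenated s, a, r, s').
        return self.net(features).squeeze(-1)

def sample_indices(scorer: TransitionScorer, features: torch.Tensor,
                   batch_size: int) -> torch.Tensor:
    # Draw a minibatch in proportion to the learned importance scores.
    with torch.no_grad():
        probs = torch.softmax(scorer(features), dim=0)
    return torch.multinomial(probs, batch_size, replacement=False)

scorer = TransitionScorer(feature_dim=12)
buffer_features = torch.randn(1000, 12)            # stand-in transition features
batch = sample_indices(scorer, buffer_features, batch_size=32)
```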
- Pre-trained Word Embeddings for Goal-conditional Transfer Learning in Reinforcement Learning [0.0]
We show how a pre-trained task-independent language model can make a goal-conditional RL agent more sample efficient.
We do this by facilitating transfer learning between different related tasks.
arXiv Detail & Related papers (2020-07-10T06:42:00Z)
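A hedged sketch of goal conditioning with pre-trained embeddings: the textual goal is embedded with frozen, task-independent word vectors, mean-pooled, and concatenated with the state features before the policy head. The vocabulary, pooling choice, and network sizes are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    def __init__(self, pretrained_vectors: torch.Tensor, state_dim: int, num_actions: int):
        super().__init__()
        # Frozen pre-trained word embeddings (e.g. loaded from GloVe-style vectors).
        self.embed = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        self.policy = nn.Sequential(
            nn.Linear(state_dim + pretrained_vectors.size(1), 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state: torch.Tensor, goal_token_ids: torch.Tensor) -> torch.Tensor:
        goal_vec = self.embed(goal_token_ids).mean(dim=1)  # mean-pool the goal words
        return self.policy(torch.cat([state, goal_vec], dim=-1))

vectors = torch.randn(5000, 50)                    # stand-in pre-trained vectors
policy = GoalConditionedPolicy(vectors, state_dim=16, num_actions=4)
logits = policy(torch.randn(2, 16), torch.randint(0, 5000, (2, 3)))
```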
- Hierarchically Decoupled Imitation for Morphological Transfer [95.19299356298876]
We show that transferring learned information from a morphologically simpler agent can massively improve the sample efficiency of a more complex one.
First, we show that incentivizing a complex agent's low-level to imitate a simpler agent's low-level significantly improves zero-shot high-level transfer.
Second, we show that KL-regularized training of the high level stabilizes learning and prevents mode-collapse.
arXiv Detail & Related papers (2020-03-03T18:56:49Z)
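The two ingredients summarized above can be sketched with placeholder action distributions: the complex agent's low-level is pushed toward the simpler agent's low-level with a KL term, and the high level is trained with a KL regularizer toward a prior policy. The distributions, coefficient, and loss shapes are illustrative, not the paper's exact objectives.

```python
import torch
import torch.distributions as D

def low_level_imitation_loss(complex_pi: D.Normal, simple_pi: D.Normal) -> torch.Tensor:
    # Encourage the complex agent's low-level action distribution to match
    # the simpler agent's low-level distribution.
    return D.kl_divergence(complex_pi, simple_pi).mean()

def kl_regularized_high_level_loss(task_loss: torch.Tensor,
                                   high_pi: D.Normal, prior_pi: D.Normal,
                                   beta: float = 0.1) -> torch.Tensor:
    # Stabilize high-level training by penalizing divergence from a prior policy.
    return task_loss + beta * D.kl_divergence(high_pi, prior_pi).mean()

complex_low = D.Normal(torch.zeros(8), torch.ones(8))       # complex agent's low-level
simple_low = D.Normal(torch.zeros(8), 0.5 * torch.ones(8))  # simpler agent's low-level
imitation = low_level_imitation_loss(complex_low, simple_low)
```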
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.