Instabilities of Offline RL with Pre-Trained Neural Representation
- URL: http://arxiv.org/abs/2103.04947v1
- Date: Mon, 8 Mar 2021 18:06:44 GMT
- Title: Instabilities of Offline RL with Pre-Trained Neural Representation
- Authors: Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, Sham M. Kakade
- Abstract summary: In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated.
Recent theoretical advances have shown that such sample-efficient offline RL is indeed possible provided certain strong representational conditions hold.
This work studies these issues from an empirical perspective to gauge how stable offline RL methods are.
- Score: 127.89397629569808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In offline reinforcement learning (RL), we seek to utilize offline data to
evaluate (or learn) policies in scenarios where the data are collected from a
distribution that substantially differs from that of the target policy to be
evaluated. Recent theoretical advances have shown that such sample-efficient
offline RL is indeed possible provided certain strong representational
conditions hold, else there are lower bounds exhibiting exponential error
amplification (in the problem horizon) unless the data collection distribution
has only a mild distribution shift relative to the target policy. This work
studies these issues from an empirical perspective to gauge how stable offline
RL methods are. In particular, our methodology explores these ideas when using
features from pre-trained neural networks, in the hope that these
representations are powerful enough to permit sample efficient offline RL.
Through extensive experiments on a range of tasks, we see that substantial
error amplification does occur even when using such pre-trained representations
(trained on the same task itself); we find offline RL is stable only under
extremely mild distribution shift. The implications of these results, both from
a theoretical and an empirical perspective, are that successful offline RL
(where we seek to go beyond the low distribution shift regime) requires
substantially stronger conditions beyond those which suffice for successful
supervised learning.
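One way to make the evaluation setting above concrete is least-squares Fitted Q-Evaluation (FQE) on top of a frozen, pre-trained feature map. The sketch below is a minimal illustration under that assumption; the feature map `phi`, the dataset layout, and the ridge regularizer are placeholders for exposition, not the authors' exact protocol.

```python
# Minimal sketch: linear Fitted Q-Evaluation (FQE) on frozen, pre-trained features.
# Assumptions (not the paper's exact setup): phi(s, a) returns a fixed feature
# vector, dataset is a list of (s, a, r, s_next) transitions from the behavior
# policy, and target_policy(s) returns the action of the policy being evaluated.
import numpy as np

def linear_fqe(phi, dataset, target_policy, gamma=0.99, n_iters=200, ridge=1e-3):
    feats = np.array([phi(s, a) for s, a, _, _ in dataset])            # (N, d)
    rewards = np.array([r for _, _, r, _ in dataset])                  # (N,)
    next_feats = np.array([phi(s2, target_policy(s2))                  # (N, d)
                           for _, _, _, s2 in dataset])

    d = feats.shape[1]
    w = np.zeros(d)
    # Regularized least-squares projection onto the fixed feature space.
    A_inv = np.linalg.inv(feats.T @ feats + ridge * np.eye(d))

    for _ in range(n_iters):
        # Bellman backup: regression targets under the current value estimate.
        targets = rewards + gamma * (next_feats @ w)
        w = A_inv @ (feats.T @ targets)

    return w  # estimated value: Q_hat(s, a) ~ phi(s, a) @ w
```

The error amplification discussed in the abstract corresponds to the case where the target-policy features (`next_feats`) fall outside the well-covered span of the behavior data: each regression step then extrapolates, and the error can compound with the effective horizon 1/(1 - gamma) rather than staying at supervised-learning levels.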
Related papers
- Bridging Distributionally Robust Learning and Offline RL: An Approach to
Mitigate Distribution Shift and Partial Data Coverage [32.578787778183546]
Offline reinforcement learning (RL) algorithms learn optimal policies using historical (offline) data.
One of the main challenges in offline RL is the distribution shift.
We propose two offline RL algorithms using the distributionally robust learning (DRL) framework.
arXiv Detail & Related papers (2023-10-27T19:19:30Z) - Offline Reinforcement Learning with Imbalanced Datasets [23.454333727200623]
A real-world offline reinforcement learning (RL) dataset is often imbalanced over the state space due to the challenge of exploration or safety considerations.
We show that typical offline RL methods based on distributional constraints, such as conservative Q-learning (CQL), are ineffective in extracting policies from such imbalanced datasets.
Inspired by natural intelligence, we propose a novel offline RL method that utilizes the augmentation of CQL with a retrieval process to recall past related experiences.
arXiv Detail & Related papers (2023-07-06T03:22:19Z) - Leveraging Factored Action Spaces for Efficient Offline Reinforcement
Learning in Healthcare [38.42691031505782]
We propose a form of linear Q-function decomposition induced by factored action spaces (a schematic sketch of this decomposition appears after this list).
Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space.
arXiv Detail & Related papers (2023-05-02T19:13:10Z) - The Role of Coverage in Online Reinforcement Learning [72.01066664756986]
We show that the mere existence of a data distribution with good coverage can enable sample-efficient online RL.
Existing complexity measures for online RL, including Bellman rank and Bellman-Eluder dimension, fail to optimally capture coverability.
We propose a new complexity measure, the sequential extrapolation coefficient, to provide a unification.
arXiv Detail & Related papers (2022-10-09T03:50:05Z) - On the Role of Discount Factor in Offline Reinforcement Learning [25.647624787936028]
The discount factor, $\gamma$, plays a vital role in improving online RL sample efficiency and estimation accuracy.
This paper examines two distinct effects of $\gamma$ in offline RL with theoretical analysis.
The results show that the discount factor plays an essential role in the performance of offline RL algorithms.
arXiv Detail & Related papers (2022-06-07T15:22:42Z) - Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement
Learning [125.8224674893018]
Offline Reinforcement Learning (RL) aims to learn policies from previously collected datasets without exploring the environment.
Applying off-policy algorithms to offline RL usually fails due to extrapolation error caused by out-of-distribution (OOD) actions.
We propose Pessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven offline algorithm without explicit policy constraints.
arXiv Detail & Related papers (2022-02-23T15:27:16Z) - Offline Reinforcement Learning: Fundamental Barriers for Value Function
Approximation [74.3002974673248]
We consider the offline reinforcement learning problem, where the aim is to learn a decision making policy from logged data.
Offline RL is becoming increasingly relevant in practice because online data collection is not well suited to safety-critical domains.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning.
arXiv Detail & Related papers (2021-11-21T23:22:37Z) - Uncertainty-Based Offline Reinforcement Learning with Diversified
Q-Ensemble [16.92791301062903]
We propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution.
Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with clipped Q-learning (a minimal sketch of this clipped-ensemble target appears after this list).
arXiv Detail & Related papers (2021-10-04T16:40:13Z) - What are the Statistical Limits of Offline RL with Linear Function
Approximation? [70.33301077240763]
Offline reinforcement learning seeks to utilize offline (observational) data to guide the learning of sequential decision-making strategies.
This work focuses on the basic question of which representational and distributional conditions are necessary to permit provably sample-efficient offline reinforcement learning.
arXiv Detail & Related papers (2020-10-22T17:32:13Z) - D4RL: Datasets for Deep Data-Driven Reinforcement Learning [119.49182500071288]
We introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL.
By moving beyond simple benchmark tasks and data collected by partially-trained RL agents, we reveal important and unappreciated deficiencies of existing algorithms.
arXiv Detail & Related papers (2020-04-15T17:18:19Z)
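For the factored-action-space entry above, the linear Q-function decomposition can be pictured as one component per action dimension, summed to score the joint action. This is a schematic form under assumed names (`q_components`, a tuple-valued action), not that paper's exact parameterization.

```python
# Schematic factored Q-function: for a factored action a = (a_1, ..., a_D),
# model Q(s, a) as the sum of per-dimension components q_d(s, a_d).
def factored_q(q_components, state, action):
    """q_components[d] scores the d-th action dimension; action is a tuple."""
    return sum(q_d(state, a_d) for q_d, a_d in zip(q_components, action))
```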
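For the diversified Q-ensemble entry above, "clipped Q-learning" refers to bootstrapping from the minimum over an ensemble of Q-networks, so that disagreement on out-of-distribution actions turns into pessimism. The callables below (`q_networks`, `policy`) are illustrative assumptions rather than that paper's implementation.

```python
# Sketch of a clipped-ensemble Bellman target: r + gamma * min_i Q_i(s', pi(s')).
import numpy as np

def clipped_ensemble_target(q_networks, policy, reward, next_state, gamma=0.99):
    next_action = policy(next_state)
    q_next = np.array([q(next_state, next_action) for q in q_networks])
    return reward + gamma * q_next.min()  # larger ensembles -> more conservative
```

Increasing the number of Q-networks tightens this minimum, which is the mechanism behind that entry's observation that simply enlarging the ensemble improves offline performance.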