Offline Reinforcement Learning: Fundamental Barriers for Value Function
Approximation
- URL: http://arxiv.org/abs/2111.10919v1
- Date: Sun, 21 Nov 2021 23:22:37 GMT
- Title: Offline Reinforcement Learning: Fundamental Barriers for Value Function
Approximation
- Authors: Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, Yunzong Xu
- Abstract summary: We consider the offline reinforcement learning problem, where the aim is to learn a decision making policy from logged data.
Offline RL is becoming increasingly relevant in practice because it avoids costly online data collection and is well suited to safety-critical domains.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning.
- Score: 74.3002974673248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the offline reinforcement learning problem, where the aim is to
learn a decision making policy from logged data. Offline RL -- particularly
when coupled with (value) function approximation to allow for generalization in
large or continuous state spaces -- is becoming increasingly relevant in
practice, because it avoids costly and time-consuming online data collection
and is well suited to safety-critical domains. Existing sample complexity
guarantees for offline value function approximation methods typically require
both (1) distributional assumptions (i.e., good coverage) and (2)
representational assumptions (i.e., ability to represent some or all $Q$-value
functions) stronger than what is required for supervised learning. However, the
necessity of these conditions and the fundamental limits of offline RL are not
well understood in spite of decades of research. This led Chen and Jiang (2019)
to conjecture that concentrability (the most standard notion of coverage) and
realizability (the weakest representation condition) alone are not sufficient
for sample-efficient offline RL. We resolve this conjecture in the positive by
proving that in general, even if both concentrability and realizability are
satisfied, any algorithm requires sample complexity polynomial in the size of
the state space to learn a non-trivial policy.
Our results show that sample-efficient offline reinforcement learning
requires either restrictive coverage conditions or representation conditions
that go beyond supervised learning, and highlight a phenomenon called
over-coverage which serves as a fundamental barrier for offline value function
approximation methods. A consequence of our results for reinforcement learning
with linear function approximation is that the separation between online and
offline RL can be arbitrarily large, even in constant dimension.
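For reference, one standard way to formalize the two conditions named in the conjecture (the notation below is ours and follows common usage in this literature rather than the paper's exact definitions): writing $\mu$ for the distribution that generated the logged data, $d^{\pi}$ for the state-action occupancy measure of a policy $\pi$, and $\mathcal{F}$ for the value function class,
- Concentrability: there is a finite constant $C$ with $\sup_{\pi}\sup_{s,a} d^{\pi}(s,a)/\mu(s,a) \le C$, i.e., every policy's occupancy measure is covered by the data distribution.
- Realizability: $\mathcal{F}$ contains the relevant $Q$-value functions, e.g. $Q^{\star} \in \mathcal{F}$ in its weakest form, or $Q^{\pi} \in \mathcal{F}$ for all policies $\pi$ in stronger forms.
As a concrete illustration of the kind of offline value function approximation method such guarantees concern, the sketch below implements a minimal fitted Q-iteration loop over a logged dataset. It is an assumption-laden example rather than anything from the paper: the ridge-regression function class, the dataset format, and all hyperparameters are choices made here purely for illustration.

    import numpy as np
    from sklearn.linear_model import Ridge

    def fitted_q_iteration(dataset, n_actions, gamma=0.99, n_iters=50):
        # dataset: list of (state_features, action, reward, next_state_features);
        # assumes every action in {0, ..., n_actions - 1} appears at least once.
        S  = np.array([s  for s, a, r, s2 in dataset])
        A  = np.array([a  for s, a, r, s2 in dataset])
        R  = np.array([r  for s, a, r, s2 in dataset], dtype=float)
        S2 = np.array([s2 for s, a, r, s2 in dataset])

        models = [Ridge(alpha=1.0) for _ in range(n_actions)]  # one linear model per action
        q_next = np.zeros((len(dataset), n_actions))            # current estimate of Q at next states
        for _ in range(n_iters):
            targets = R + gamma * q_next.max(axis=1)             # Bellman backup using logged data only
            for a in range(n_actions):
                mask = (A == a)
                models[a].fit(S[mask], targets[mask])
            q_next = np.column_stack([m.predict(S2) for m in models])
        # Greedy policy with respect to the final Q estimate.
        return lambda s: int(np.argmax([m.predict(np.asarray(s).reshape(1, -1))[0] for m in models]))

The paper's lower bound says that, in general, no such procedure (nor any other algorithm) can avoid sample complexity polynomial in the number of states when only concentrability and realizability are assumed.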
Related papers
- Is Value Learning Really the Main Bottleneck in Offline RL? [70.54708989409409]
We show that the choice of a policy extraction algorithm significantly affects the performance and scalability of offline RL.
We propose two simple test-time policy improvement methods and show that these methods lead to better performance.
arXiv Detail & Related papers (2024-06-13T17:07:49Z)
- Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning [53.97335841137496]
We propose an oracle-efficient algorithm, dubbed Pessimistic Nonlinear Least-Squares Value Iteration (PNLSVI), for offline RL with non-linear function approximation.
Our algorithm enjoys a regret bound that has a tight dependency on the function class complexity and achieves minimax optimal instance-dependent regret when specialized to linear function approximation.
arXiv Detail & Related papers (2023-10-02T17:42:01Z)
- Leveraging Factored Action Spaces for Efficient Offline Reinforcement Learning in Healthcare [38.42691031505782]
We propose a form of linear Q-function decomposition induced by factored action spaces.
Our approach can help an agent make more accurate inferences within underexplored regions of the state-action space.
arXiv Detail & Related papers (2023-05-02T19:13:10Z)
- Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation [24.577243536475233]
Offline reinforcement learning (RL) concerns pursuing an optimal policy for sequential decision-making from a pre-collected dataset.
Recent theoretical progress has focused on developing sample-efficient offline RL algorithms with various relaxed assumptions on data coverage and function approximators.
We revisit the linear-programming framework for offline RL, and advance the existing results in several aspects.
arXiv Detail & Related papers (2022-12-28T15:28:12Z)
- Optimal Conservative Offline RL with General Function Approximation via Augmented Lagrangian [18.2080757218886]
Offline reinforcement learning (RL) refers to decision-making from a previously-collected dataset of interactions.
We present the first set of offline RL algorithms that are statistically optimal and practical under general function approximation and single-policy concentrability.
arXiv Detail & Related papers (2022-11-01T19:28:48Z)
- The Role of Coverage in Online Reinforcement Learning [72.01066664756986]
We show that the mere existence of a data distribution with good coverage can enable sample-efficient online RL.
Existing complexity measures for online RL, including Bellman rank and Bellman-Eluder dimension, fail to optimally capture coverability.
We propose a new complexity measure, the sequential extrapolation coefficient, to provide a unification.
arXiv Detail & Related papers (2022-10-09T03:50:05Z)
- Offline Reinforcement Learning Under Value and Density-Ratio Realizability: the Power of Gaps [15.277483173402128]
We provide guarantees for a pessimistic algorithm based on a version space formed by marginalized importance sampling.
Our work is the first to identify the utility and the novel mechanism of gap assumptions in offline reinforcement learning.
arXiv Detail & Related papers (2022-03-25T23:33:38Z)
- Instabilities of Offline RL with Pre-Trained Neural Representation [127.89397629569808]
In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated.
Recent theoretical advances have shown that such sample-efficient offline RL is indeed possible provided certain strong representational conditions hold.
This work studies these issues from an empirical perspective to gauge how stable offline RL methods are.
arXiv Detail & Related papers (2021-03-08T18:06:44Z)
- What are the Statistical Limits of Offline RL with Linear Function Approximation? [70.33301077240763]
Offline reinforcement learning seeks to utilize offline (observational) data to guide the learning of sequential decision making strategies.
This work focuses on the basic question of what are necessary representational and distributional conditions that permit provable sample-efficient offline reinforcement learning.
arXiv Detail & Related papers (2020-10-22T17:32:13Z)