Offline Reinforcement Learning Under Value and Density-Ratio
Realizability: the Power of Gaps
- URL: http://arxiv.org/abs/2203.13935v1
- Date: Fri, 25 Mar 2022 23:33:38 GMT
- Title: Offline Reinforcement Learning Under Value and Density-Ratio
Realizability: the Power of Gaps
- Authors: Jinglin Chen, Nan Jiang
- Abstract summary: We provide guarantees to a pessimistic algorithm based on a version space formed by marginalized importance sampling.
Our work is the first to identify the utility and the novel mechanism of gap assumptions in offline reinforcement learning.
- Score: 15.277483173402128
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a challenging theoretical problem in offline reinforcement
learning (RL): obtaining sample-efficiency guarantees with a dataset lacking
sufficient coverage, under only realizability-type assumptions for the function
approximators. While the existing theory has addressed learning under
realizability and under non-exploratory data separately, no work has been able
to address both simultaneously (except for a concurrent work which we compare
to in detail). Under an additional gap assumption, we provide guarantees to a
simple pessimistic algorithm based on a version space formed by marginalized
importance sampling, and the guarantee only requires the data to cover the
optimal policy and the function classes to realize the optimal value and
density-ratio functions. While similar gap assumptions have been used in other
areas of RL theory, our work is the first to identify the utility and the novel
mechanism of gap assumptions in offline RL.
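The abstract describes the method only at a high level: form a version space of candidate value functions whose marginalized importance sampling (MIS) weighted Bellman errors are small on the data, then select from it pessimistically. Below is a minimal tabular Python sketch of that general recipe under simplifying assumptions (finite states and actions, dictionary-based function classes, a hypothetical error threshold); it is not the paper's exact estimator, and it does not model the gap assumption that drives the guarantee.

```python
import numpy as np

def pessimistic_mis_selection(dataset, q_class, w_class, gamma, s0, threshold):
    """Sketch of pessimistic version-space selection with MIS-weighted Bellman errors.

    dataset   : list of (s, a, r, s_next) transitions logged by the behavior policy
    q_class   : candidate Q-functions, each a dict mapping (s, a) -> value
    w_class   : candidate density ratios, each a dict mapping (s, a) -> weight
    gamma     : discount factor
    s0        : a fixed initial state (simplification)
    threshold : slack allowed on the weighted Bellman error (hypothetical tuning knob)
    """
    actions = sorted({a for (_, a, _, _) in dataset})

    def greedy_value(q, s):
        # Value of acting greedily with respect to q at state s.
        return max(q.get((s, a), 0.0) for a in actions)

    def mis_bellman_error(q, w):
        # Empirical average of w(s,a) * (r + gamma * max_a' q(s',a') - q(s,a)).
        residuals = [
            w.get((s, a), 0.0) * (r + gamma * greedy_value(q, s_next) - q.get((s, a), 0.0))
            for (s, a, r, s_next) in dataset
        ]
        return abs(float(np.mean(residuals)))

    # Version space: keep q only if its weighted Bellman error is small for every w.
    version_space = [
        q for q in q_class
        if max(mis_bellman_error(q, w) for w in w_class) <= threshold
    ]
    if not version_space:  # with a suitable threshold, a realized Q* should survive
        raise ValueError("version space is empty; increase the threshold")

    # Pessimism: among survivors, pick the candidate with the smallest initial value.
    q_hat = min(version_space, key=lambda q: greedy_value(q, s0))

    # Return the greedy policy of the pessimistic candidate on the observed states.
    return {s: max(actions, key=lambda a: q_hat.get((s, a), 0.0)) for (s, _, _, _) in dataset}
```

Per the abstract, the guarantee for this style of algorithm only needs the offline data to cover the optimal policy (through the density-ratio class) and the two classes to realize the optimal value and density-ratio functions; the gap assumption is what lets the pessimistic choice compete with the optimal policy.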
Related papers
- Provable Offline Preference-Based Reinforcement Learning [95.00042541409901]
We investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback.
We consider the general reward setting where the reward can be defined over the whole trajectory.
We introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability.
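For context, the standard state-action single-policy concentrability coefficient that such coefficients refine is usually written as below, where d^{\pi^*} is the occupancy distribution of the comparator policy and \mu is the offline data distribution; this is the textbook notion, not the paper's per-trajectory variant.

```latex
C^{\pi^*} \;=\; \sup_{s,a} \frac{d^{\pi^*}(s,a)}{\mu(s,a)}
```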
arXiv Detail & Related papers (2023-05-24T07:11:26Z)
- Offline Reinforcement Learning with Additional Covering Distributions [0.0]
We study learning optimal policies from a logged dataset, i.e., offline RL, with function approximation.
We show that sample-efficient offline RL for general MDPs is possible with only a partial coverage dataset and weak realizable function classes.
arXiv Detail & Related papers (2023-05-22T03:31:03Z)
- Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation [24.577243536475233]
Offline reinforcement learning (RL) concerns pursuing an optimal policy for sequential decision-making from a pre-collected dataset.
Recent theoretical progress has focused on developing sample-efficient offline RL algorithms with various relaxed assumptions on data coverage and function approximators.
We revisit the linear-programming framework for offline RL, and advance the existing results in several aspects.
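For reference, the classical occupancy-measure linear program for a discounted MDP, which this line of work relaxes with general function approximation, reads roughly as follows; here \rho is the initial-state distribution and the (1-\gamma) normalization is one common convention, not necessarily the paper's.

```latex
\max_{d \ge 0} \;\; \sum_{s,a} d(s,a)\, r(s,a)
\qquad \text{s.t.} \qquad
\sum_{a} d(s,a) \;=\; (1-\gamma)\,\rho(s) \;+\; \gamma \sum_{s',a'} P(s \mid s',a')\, d(s',a') \quad \forall s
```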
arXiv Detail & Related papers (2022-12-28T15:28:12Z)
- Optimal Conservative Offline RL with General Function Approximation via Augmented Lagrangian [18.2080757218886]
Offline reinforcement learning (RL) refers to decision-making from a previously collected dataset of interactions.
We present the first set of offline RL algorithms that are statistically optimal and practical under general function approximation and single-policy concentrability.
arXiv Detail & Related papers (2022-11-01T19:28:48Z)
- The Role of Coverage in Online Reinforcement Learning [72.01066664756986]
We show that the mere existence of a data distribution with good coverage can enable sample-efficient online RL.
Existing complexity measures for online RL, including Bellman rank and Bellman-Eluder dimension, fail to optimally capture coverability.
We propose a new complexity measure, the sequential extrapolation coefficient, to provide a unification.
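Roughly, the coverability referred to here is the smallest worst-case (over policies) concentrability achievable by any single data distribution; one common way to write it, with normalization and horizon treatment varying across papers, is:

```latex
C_{\mathrm{cov}} \;=\; \inf_{\mu \in \Delta(\mathcal{S}\times\mathcal{A})}\; \sup_{\pi}\; \sup_{s,a} \frac{d^{\pi}(s,a)}{\mu(s,a)}
```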
arXiv Detail & Related papers (2022-10-09T03:50:05Z)
- Pessimistic Minimax Value Iteration: Provably Efficient Equilibrium Learning from Offline Datasets [101.5329678997916]
We study episodic two-player zero-sum Markov games (MGs) in the offline setting.
The goal is to find an approximate Nash equilibrium (NE) policy pair based on a dataset collected a priori.
arXiv Detail & Related papers (2022-02-15T15:39:30Z)
- Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality [141.89413461337324]
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
We propose a theoretical formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective.
arXiv Detail & Related papers (2022-02-14T01:31:46Z)
- Offline Reinforcement Learning with Realizability and Single-policy Concentrability [40.15976281104956]
Sample-efficiency guarantees for offline reinforcement learning often rely on strong assumptions on both the function classes and the data coverage.
We analyze a simple algorithm based on the primal-dual formulation of MDPs, where the dual variables (the discounted occupancy) are modeled by a density-ratio function against the offline data.
arXiv Detail & Related papers (2022-02-09T18:51:24Z)
- Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation [74.3002974673248]
We consider the offline reinforcement learning problem, where the aim is to learn a decision making policy from logged data.
Offline RL is becoming increasingly relevant in practice because it avoids costly online data collection and is well suited to safety-critical domains.
Our results show that sample-efficient offline reinforcement learning requires either restrictive coverage conditions or representation conditions that go beyond supervised learning.
arXiv Detail & Related papers (2021-11-21T23:22:37Z)
- Instabilities of Offline RL with Pre-Trained Neural Representation [127.89397629569808]
In offline reinforcement learning (RL), we seek to utilize offline data to evaluate (or learn) policies in scenarios where the data are collected from a distribution that substantially differs from that of the target policy to be evaluated.
Recent theoretical advances have shown that such sample-efficient offline RL is indeed possible provided certain strong representational conditions hold.
This work studies these issues from an empirical perspective to gauge how stable offline RL methods are.
arXiv Detail & Related papers (2021-03-08T18:06:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.