On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
- URL: http://arxiv.org/abs/2206.10770v1
- Date: Tue, 21 Jun 2022 23:17:43 GMT
- Title: On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
- Authors: Jinglin Chen, Aditya Modi, Akshay Krishnamurthy, Nan Jiang, Alekh Agarwal
- Abstract summary: We study reward-free reinforcement learning (RL) under general non-linear function approximation.
We propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration.
- Score: 54.55689632571575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study reward-free reinforcement learning (RL) under general non-linear function approximation, and establish sample efficiency and hardness results under various standard structural assumptions. On the positive side, we propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration under minimal structural assumptions, which covers the previously studied settings of linear MDPs (Jin et al., 2020b), linear completeness (Zanette et al., 2020b) and low-rank MDPs with unknown representation (Modi et al., 2021). Our analyses indicate that the explorability or reachability assumptions, previously made for the latter two settings, are not statistically necessary for reward-free exploration. On the negative side, we provide a statistical hardness result for both reward-free and reward-aware exploration under linear completeness assumptions when the underlying features are unknown, showing an exponential separation between the low-rank and linear completeness settings.
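To ground the reward-free setting, the sketch below shows the generic two-phase protocol this line of work studies: explore first without observing any reward, then plan offline for each reward function of interest. It is a minimal illustration under assumed interfaces (`env` and `plan_offline` are hypothetical names), not RFOLIVE itself, which explores far more deliberately than the uniform-random stub used here.

```python
import random

def reward_free_rl(env, reward_fns, num_episodes, horizon):
    """Generic two-phase reward-free protocol (illustrative sketch only)."""
    # Phase 1: reward-free exploration -- no reward signal is observed.
    # RFOLIVE and related algorithms replace this uniform-random policy
    # with a carefully designed exploratory one; random actions are a stub.
    dataset = []
    for _ in range(num_episodes):
        s = env.reset()
        for _ in range(horizon):
            a = random.choice(env.actions)
            s_next = env.step(a)
            dataset.append((s, a, s_next))
            s = s_next
    # Phase 2: offline planning -- relabel the same transitions with each
    # reward function and run any offline planner (e.g., least-squares
    # value iteration). `plan_offline` is a hypothetical placeholder.
    policies = {}
    for name, r in reward_fns.items():
        labeled = [(s, a, r(s, a), s_next) for (s, a, s_next) in dataset]
        policies[name] = plan_offline(labeled)
    return policies
```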
Related papers
- Learning Off-policy with Model-based Intrinsic Motivation For Active Online Exploration [15.463313629574111]
This paper investigates how to achieve sample-efficient exploration in continuous control tasks.
We introduce an RL algorithm that incorporates a predictive model and off-policy learning elements.
We derive an intrinsic reward without incurring additional parameter overhead (see the sketch below).
arXiv Detail & Related papers (2024-03-31T11:39:11Z)
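As a rough illustration of the model-based intrinsic-motivation idea in the entry above: a common recipe scores transitions by the prediction error of a learned dynamics model, so poorly modeled (hence less-visited) regions yield higher exploration bonuses. This is a generic sketch; `model.predict` is an assumed interface, not the paper's API.

```python
import numpy as np

def intrinsic_reward(model, s, a, s_next):
    """Curiosity-style intrinsic reward from one-step model error
    (generic sketch; not the paper's exact construction)."""
    s_pred = model.predict(s, a)  # learned dynamics model's next-state guess
    return float(np.linalg.norm(s_pred - s_next) ** 2)
```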
- Offline Reinforcement Learning with Additional Covering Distributions [0.0]
We study learning optimal policies from a logged dataset, i.e., offline RL, with function approximation.
We show that sample-efficient offline RL for general MDPs is possible with only a partial-coverage dataset (quantified in the note below) and weak realizable function classes.
arXiv Detail & Related papers (2023-05-22T03:31:03Z)
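For context on the entry above, "coverage" is typically quantified by a concentrability coefficient; the standard single-policy form is shown below (a textbook definition given for intuition, with generic notation not taken from the paper).

```latex
% Single-policy concentrability of a target policy pi with occupancy d^pi,
% relative to the data distribution mu (standard definition):
\[
C^{\pi} \;=\; \sup_{s,a} \frac{d^{\pi}(s,a)}{\mu(s,a)}.
\]
% Partial coverage only requires C^{pi} to be finite for some good policy pi,
% rather than asking mu to cover every candidate policy.
```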
- Offline Reinforcement Learning with Instrumental Variables in Confounded Markov Decision Processes [93.61202366677526]
We study offline reinforcement learning (RL) in the face of unmeasured confounders.
We propose various policy learning methods with finite-sample suboptimality guarantees for finding the optimal in-class policy.
arXiv Detail & Related papers (2022-09-18T22:03:55Z)
- Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity (a generic pessimistic update is sketched below).
arXiv Detail & Related papers (2022-02-28T15:39:36Z)
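To make "pessimism" concrete for the entry above, here is a minimal tabular pessimistic Q-update: a count-based penalty is subtracted from the Bellman target so rarely observed state-action pairs are valued conservatively. The step size and bonus form are illustrative assumptions, and the paper's variance-reduction machinery is omitted.

```python
import numpy as np

def pessimistic_q_update(Q, counts, s, a, r, v_next, c=1.0):
    """One tabular pessimistic Q-learning step (generic sketch,
    not the paper's exact algorithm)."""
    counts[s, a] += 1
    n = counts[s, a]
    lr = 1.0 / n                 # simple 1/n step size (illustrative choice)
    bonus = c / np.sqrt(n)       # penalty shrinks as (s, a) gathers data
    target = r + v_next - bonus  # pessimistic Bellman target
    Q[s, a] = (1 - lr) * Q[s, a] + lr * target
    return Q[s, a]
```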
- Solving Multistage Stochastic Linear Programming via Regularized Linear Decision Rules: An Application to Hydrothermal Dispatch Planning [77.34726150561087]
We propose a novel regularization scheme for linear decision rules (LDR) based on AdaSO (the adaptive least absolute shrinkage and selection operator); the generic penalty is sketched below.
Experiments show that the risk of overfitting is non-negligible when the classical non-regularized LDR is used to solve MSLP.
For the LHDP problem, our analysis highlights several benefits of the proposed framework over the non-regularized benchmark.
arXiv Detail & Related papers (2021-10-07T02:36:14Z)
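The AdaSO regularizer mentioned above follows the adaptive-lasso template; its generic form is shown below (standard adaptive-lasso notation for intuition, not the paper's exact formulation).

```latex
% Adaptive lasso penalty on LDR coefficients beta (generic template):
\[
\min_{\beta}\; \mathrm{Cost}(\beta) \;+\; \lambda \sum_{j} w_j \lvert \beta_j \rvert,
\qquad
w_j = \frac{1}{\lvert \hat{\beta}_j \rvert^{\gamma}},\ \gamma > 0,
\]
% where beta-hat is a pilot (unregularized) estimate: coefficients the pilot
% fit finds large are penalized lightly, small ones are pushed toward zero.
```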
- Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration [143.43658264904863]
We show how value iteration under a more standard notion of low inherent Bellman error, typically employed in least-squares value-iteration-style algorithms, can provide strong PAC guarantees on learning a near-optimal value function (the definition is recalled below).
We present a computationally tractable algorithm for the reward-free setting and show how it can be used to learn a near-optimal policy for any (linear) reward function.
arXiv Detail & Related papers (2020-08-18T04:34:21Z)
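For reference on the entry above, the inherent Bellman error of a function class is usually defined as follows (a standard statement with generic notation, not quoted from the paper).

```latex
% Inherent Bellman error of function classes F_h over horizon H
% (standard definition; T_h is the Bellman backup operator):
\[
\mathcal{I} \;=\; \max_{h \in [H]} \;
\sup_{f \in \mathcal{F}_{h+1}} \; \inf_{g \in \mathcal{F}_h}
\; \bigl\| g - \mathcal{T}_h f \bigr\|_{\infty}.
\]
% "Low inherent Bellman error" means every Bellman backup of a function in
% the class is well-approximated within the class itself.
```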
- On Reward-Free Reinforcement Learning with Linear Function Approximation [144.4210285338698]
Reward-free reinforcement learning (RL) is a framework suitable for both the batch RL setting and settings where there are many reward functions of interest.
In this work, we give both positive and negative results for reward-free RL with linear function approximation.
arXiv Detail & Related papers (2020-06-19T17:59:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.