Nearly Horizon-Free Offline Reinforcement Learning
- URL: http://arxiv.org/abs/2103.14077v1
- Date: Thu, 25 Mar 2021 18:52:17 GMT
- Title: Nearly Horizon-Free Offline Reinforcement Learning
- Authors: Tongzheng Ren, Jialian Li, Bo Dai, Simon S. Du, Sujay Sanghavi
- Abstract summary: We revisit offline reinforcement learning on episodic time-homogeneous Markov Decision Processes with $S$ states, $A$ actions and planning horizon $H$.
We obtain the first set of nearly $H$-free sample complexity bounds for evaluation and planning using the empirical MDPs.
- Score: 97.36751930393245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We revisit offline reinforcement learning on episodic time-homogeneous
tabular Markov Decision Processes with $S$ states, $A$ actions and planning
horizon $H$. Given $N$ collected episodes of data with minimum cumulative
reaching probability $d_m$, we obtain the first set of nearly $H$-free sample
complexity bounds for evaluation and planning using the empirical MDPs: 1. For
offline evaluation, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Nd_m}}\right)$
error rate, which matches the lower bound and carries no additional
$\mathrm{poly}\left(S,A\right)$ dependency in the higher-order term, in
contrast to previous works~\citep{yin2020near,yin2020asymptotically}. 2. For
offline policy optimization, we obtain an $\tilde{O}\left(\sqrt{\frac{1}{Nd_m}}
+ \frac{S}{Nd_m}\right)$ error rate, improving upon the best known result of
\cite{cui2020plug}, which has additional $H$ and $S$ factors in the main term.
Furthermore, this bound approaches the
$\Omega\left(\sqrt{\frac{1}{Nd_m}}\right)$ lower bound up to logarithmic
factors and a higher-order term. To the best of our knowledge, these are the
first nearly horizon-free bounds in offline reinforcement learning.
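As a rough illustration of the plug-in approach the abstract refers to (build the empirical MDP from the offline episodes, then evaluate or plan on it), here is a minimal Python/NumPy sketch. The function and variable names are hypothetical, and it deliberately omits everything the paper's analysis is actually about (confidence arguments, the role of $d_m$, and the horizon-free error rates); it only shows the plug-in construction and finite-horizon backward induction.

```python
import numpy as np

def build_empirical_mdp(episodes, S, A):
    """Estimate a time-homogeneous tabular MDP (P_hat, R_hat) from offline episodes.

    `episodes` is a list of trajectories, each a list of (s, a, r, s_next) tuples.
    State-action pairs never visited default to a zero-reward self-loop.
    """
    counts = np.zeros((S, A, S))
    reward_sums = np.zeros((S, A))
    for traj in episodes:
        for (s, a, r, s_next) in traj:
            counts[s, a, s_next] += 1
            reward_sums[s, a] += r

    n_sa = counts.sum(axis=-1)            # visit counts N(s, a)
    P_hat = np.zeros((S, A, S))
    R_hat = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            if n_sa[s, a] > 0:
                P_hat[s, a] = counts[s, a] / n_sa[s, a]
                R_hat[s, a] = reward_sums[s, a] / n_sa[s, a]
            else:
                P_hat[s, a, s] = 1.0      # unseen pair: self-loop, zero reward
    return P_hat, R_hat

def evaluate_policy(P_hat, R_hat, pi, H):
    """Offline evaluation: value of a deterministic policy pi[h, s] on the empirical MDP."""
    S = R_hat.shape[0]
    V = np.zeros(S)
    for h in reversed(range(H)):
        a = pi[h]                         # actions for all states at step h, shape (S,)
        V = R_hat[np.arange(S), a] + P_hat[np.arange(S), a] @ V
    return V

def plan(P_hat, R_hat, H):
    """Offline planning: finite-horizon value iteration on the empirical MDP."""
    S, A = R_hat.shape
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R_hat + P_hat @ V             # Q[s, a] = R_hat[s, a] + sum_s' P_hat[s, a, s'] * V[s']
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi, V
```

The bounds quoted above concern how quickly values and policies computed on $(\hat{P}, \hat{R})$ approach those of the true MDP as $N$ grows (at a rate of roughly $\sqrt{1/(Nd_m)}$, with no polynomial dependence on $H$); the sketch covers only the estimator itself, not that analysis.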
Related papers
- Near-Optimal Regret Bounds for Multi-batch Reinforcement Learning [54.806166861456035]
We study the episodic reinforcement learning (RL) problem modeled by finite-horizon Markov Decision Processes (MDPs) with a constraint on the number of batches.
We design a computationally efficient algorithm that achieves a near-optimal regret of $\tilde{O}\left(\sqrt{SAH^3K\ln(1/\delta)}\right)$ in $K$ episodes, where $\tilde{O}(\cdot)$ hides logarithmic terms of $(S,A,H,K)$.
Our technical contributions are two-fold: 1) a near-optimal design scheme to explore
arXiv Detail & Related papers (2022-10-15T09:22:22Z) - Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies [88.75843804630772]
We design an algorithm that achieves an $O\left(\mathrm{poly}(S,A,\log K)\sqrt{K}\right)$ regret in contrast to existing bounds.
Our result relies on a sequence of new structural lemmas establishing the approximation power, stability, and concentration property of stationary policies.
arXiv Detail & Related papers (2022-03-24T08:14:12Z) - Optimal SQ Lower Bounds for Learning Halfspaces with Massart Noise [9.378684220920562]
We give the tightest statistical query (SQ) lower bounds for learning halfspaces in the presence of Massart noise.
We show that for arbitrary $\eta \in [0,1/2]$ every SQ algorithm achieving misclassification error better than $\eta$ requires queries of superpolynomial accuracy.
arXiv Detail & Related papers (2022-01-24T17:33:19Z) - Differentially Private $\ell_1$-norm Linear Regression with Heavy-tailed Data [22.233705161499273]
We focus on the $\ell_1$-norm linear regression in the $\epsilon$-DP model.
We show that it is possible to achieve an upper bound of $\tilde{O}\left(\sqrt{\frac{d}{n\epsilon}}\right)$ with high probability.
Our algorithms can also be extended to more relaxed cases where only each coordinate of the data has bounded moments.
arXiv Detail & Related papers (2022-01-10T08:17:05Z) - Gap-Dependent Unsupervised Exploration for Reinforcement Learning [40.990467706237396]
We present an efficient algorithm for task-agnostic reinforcement learning.
The algorithm takes only $\widetilde{\mathcal{O}}\left(1/\epsilon \cdot (H^3SA/\rho + H^4 S^2 A)\right)$ episodes of exploration.
We show that, information-theoretically, this bound is nearly tight for $\rho < \Theta(1/(HS))$ and $H>1$.
arXiv Detail & Related papers (2021-08-11T20:42:46Z) - $Q$-learning with Logarithmic Regret [60.24952657636464]
We prove that an optimistic $Q$-learning algorithm enjoys a $\mathcal{O}\left(\frac{SA\cdot \mathrm{poly}\left(H\right)}{\Delta_{\min}}\log\left(SAT\right)\right)$ cumulative regret bound, where $S$ is the number of states, $A$ is the number of actions, $H$ is the planning horizon, $T$ is the total number of steps, and $\Delta_{\min}$ is the minimum sub-optimality gap.
arXiv Detail & Related papers (2020-06-16T13:01:33Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an improved algorithm with better sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Agnostic Q-learning with Function Approximation in Deterministic Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.