Nearly Minimax Optimal Reward-free Reinforcement Learning
- URL: http://arxiv.org/abs/2010.05901v2
- Date: Fri, 23 Oct 2020 17:41:20 GMT
- Title: Nearly Minimax Optimal Reward-free Reinforcement Learning
- Authors: Zihan Zhang, Simon S. Du, Xiangyang Ji
- Abstract summary: We study the reward-free reinforcement learning framework, which is particularly suitable for batch reinforcement learning and scenarios where one needs policies for multiple reward functions.
We give a new efficient algorithm, \textbf{S}taged \textbf{S}ampling + \textbf{T}runcated \textbf{P}lanning (\algoname), which interacts with the environment at most $O\left(\frac{S^2A}{\epsilon^2}\text{poly}\log\left(\frac{SAH}{\epsilon}\right)\right)$ episodes in the exploration phase.
- Score: 88.75843804630772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the reward-free reinforcement learning framework, which is
particularly suitable for batch reinforcement learning and scenarios where one
needs policies for multiple reward functions. This framework has two phases. In
the exploration phase, the agent collects trajectories by interacting with the
environment without using any reward signal. In the planning phase, the agent
needs to return a near-optimal policy for arbitrary reward functions. We give a
new efficient algorithm, \textbf{S}taged \textbf{S}ampling + \textbf{T}runcated
\textbf{P}lanning (\algoname), which interacts with the environment at most
$O\left(
\frac{S^2A}{\epsilon^2}\text{poly}\log\left(\frac{SAH}{\epsilon}\right)
\right)$ episodes in the exploration phase, and guarantees to output a
near-optimal policy for arbitrary reward functions in the planning phase. Here,
$S$ is the size of state space, $A$ is the size of action space, $H$ is the
planning horizon, and $\epsilon$ is the target accuracy relative to the total
reward. Notably, our sample complexity scales only \emph{logarithmically} with
$H$, in contrast to all existing results which scale \emph{polynomially} with
$H$. Furthermore, this bound matches the minimax lower bound
$\Omega\left(\frac{S^2A}{\epsilon^2}\right)$ up to logarithmic factors.
Our results rely on three new techniques: 1) A new sufficient condition for
the dataset to plan for an $\epsilon$-suboptimal policy; 2) A new way to plan
efficiently under the proposed condition using soft-truncated planning; 3)
Constructing an extended MDP to maximize the truncated accumulative rewards
efficiently.
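The two-phase protocol described in the abstract can be made concrete with a small tabular sketch. The snippet below is only an illustration of the framework, not the paper's Staged Sampling + Truncated Planning algorithm: exploration uses a uniformly random policy that ignores rewards, planning runs plain backward value iteration on the empirical model once a reward function is supplied, and the `env` interface, `explore`, and `plan` are hypothetical names introduced here.

```python
import numpy as np

def explore(env, S, A, H, num_episodes, rng=np.random.default_rng(0)):
    """Exploration phase: collect transition counts without using any reward signal.

    A uniformly random policy is used purely for illustration; the paper's
    Staged Sampling is what makes this phase sample-efficient.
    """
    counts = np.zeros((S, A, S))              # visit counts N(s, a, s')
    for _ in range(num_episodes):
        s = env.reset()                       # hypothetical tabular-env interface
        for _ in range(H):
            a = rng.integers(A)
            s_next = env.step(a)              # the reward signal is deliberately ignored
            counts[s, a, s_next] += 1
            s = s_next
    return counts

def plan(counts, reward, H):
    """Planning phase: given an arbitrary reward[s, a], run backward value
    iteration on the empirical transition model built from exploration data."""
    S, A, _ = counts.shape
    n_sa = counts.sum(axis=2, keepdims=True)  # visits to each (s, a)
    p_hat = np.where(n_sa > 0, counts / np.maximum(n_sa, 1), 1.0 / S)
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):              # finite-horizon backward induction
        Q = reward + p_hat @ V                # Q[s, a] = r(s, a) + E_{s'~p_hat}[V(s')]
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy
```

The paper's contribution is precisely where this sketch is naive: its exploration phase provably covers the MDP within $O\left(\frac{S^2A}{\epsilon^2}\text{poly}\log\left(\frac{SAH}{\epsilon}\right)\right)$ episodes, and the naive planner above is replaced by soft-truncated planning so that the near-optimality guarantee holds for arbitrary reward functions.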
Related papers
- Sharper Model-free Reinforcement Learning for Average-reward Markov
Decision Processes [21.77276136591518]
We develop provably efficient model-free reinforcement learning (RL) algorithms for Markov Decision Processes (MDPs).
In the simulator setting, we propose a model-free RL algorithm that finds an $\epsilon$-optimal policy using $\widetilde{O}\left(\frac{SA\,\mathrm{sp}(h^*)}{\epsilon^2}+\frac{S^2A\,\mathrm{sp}(h^*)}{\epsilon^2}\right)$ samples.
arXiv Detail & Related papers (2023-06-28T17:43:19Z) - Eluder-based Regret for Stochastic Contextual MDPs [43.19667415823089]
We present the E-UC$^3$RL algorithm for regret minimization in Contextual Markov Decision Processes (CMDPs).
Our algorithm is efficient (assuming efficient offline regression oracles) and enjoys a regret guarantee of $\widetilde{O}\left(H^3 \sqrt{T\,|S|\,|A|\,d_{\mathrm{E}}(\mathcal{P})}\right)$.
arXiv Detail & Related papers (2022-11-27T20:38:47Z) - Near-Optimal Regret Bounds for Multi-batch Reinforcement Learning [54.806166861456035]
We study the episodic reinforcement learning (RL) problem modeled by finite-horizon Markov Decision Processes (MDPs) with a constraint on the number of batches.
We design a computationally efficient algorithm to achieve a near-optimal regret of $\tilde{O}(\sqrt{SAH^3K\ln(1/\delta)})$ in $K$ episodes, where $\tilde{O}(\cdot)$ hides logarithmic terms of $(S,A,H,K)$.
Our technical contributions are two-fold: 1) a near-optimal design scheme to explore
arXiv Detail & Related papers (2022-10-15T09:22:22Z) - Reward-Mixing MDPs with a Few Latent Contexts are Learnable [75.17357040707347]
We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs).
Our goal is to learn a near-optimal policy that nearly maximizes the $H$ time-step cumulative rewards in such a model.
arXiv Detail & Related papers (2022-10-05T22:52:00Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes (MDPs) in the fixed confidence setting under a generative model.
The lower bound, expressed as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Reward-Free Model-Based Reinforcement Learning with Linear Function
Approximation [92.99933928528797]
We study model-based reward-free reinforcement learning with linear function approximation for episodic Markov decision processes (MDPs).
In the planning phase, the agent is given a specific reward function and uses samples collected from the exploration phase to learn a good policy.
We show that to obtain an $\epsilon$-optimal policy for an arbitrary reward function, UCRL-RFE needs to sample at most $\tilde{O}(H^4d(H + d)\epsilon^{-2})$ episodes.
arXiv Detail & Related papers (2021-10-12T23:03:58Z) - Gap-Dependent Unsupervised Exploration for Reinforcement Learning [40.990467706237396]
We present an efficient algorithm for task-agnostic reinforcement learning.
The algorithm takes only $\widetilde{\mathcal{O}}\left(1/\epsilon \cdot (H^3SA/\rho + H^4 S^2 A)\right)$ episodes of exploration.
We show that, information-theoretically, this bound is nearly tight for $\rho \le \Theta(1/(HS))$ and $H>1$.
arXiv Detail & Related papers (2021-08-11T20:42:46Z) - Reward-Free Exploration for Reinforcement Learning [82.3300753751066]
We propose a new "reward-free RL" framework to isolate the challenges of exploration.
We give an efficient algorithm that conducts $\tilde{\mathcal{O}}(S^2A\,\mathrm{poly}(H)/\epsilon^2)$ episodes of exploration.
We also give a nearly-matching $\Omega(S^2AH^2/\epsilon^2)$ lower bound, demonstrating the near-optimality of our algorithm in this setting.
arXiv Detail & Related papers (2020-02-07T14:03:38Z)
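For quick orientation, the reward-free exploration upper bounds quoted in this listing can be set side by side (restated from the abstracts above; logarithmic factors are suppressed, and the UCRL-RFE bound is for linear MDPs with feature dimension $d$, so it is not directly comparable to the tabular ones):
$$
\underbrace{O\!\left(\tfrac{S^2A}{\epsilon^2}\,\mathrm{polylog}\!\left(\tfrac{SAH}{\epsilon}\right)\right)}_{\text{this paper}}
\qquad
\underbrace{\tilde{\mathcal{O}}\!\left(\tfrac{S^2A\,\mathrm{poly}(H)}{\epsilon^2}\right)}_{\text{Reward-Free Exploration (2020)}}
\qquad
\underbrace{\tilde{O}\!\left(H^4 d(H+d)\epsilon^{-2}\right)}_{\text{UCRL-RFE (linear MDPs)}}
$$
Against the quoted lower bounds $\Omega\!\left(\tfrac{S^2A}{\epsilon^2}\right)$ (this paper) and $\Omega\!\left(\tfrac{S^2AH^2}{\epsilon^2}\right)$ (the 2020 setting), only the first upper bound depends on the horizon $H$ merely logarithmically, which is the headline improvement.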