Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost
- URL: http://arxiv.org/abs/2202.06385v1
- Date: Sun, 13 Feb 2022 19:01:06 GMT
- Title: Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost
- Authors: Dan Qiao, Ming Yin, Ming Min, Yu-Xiang Wang
- Abstract summary: We propose a new algorithm that achieves a regret of $\widetilde{O}(\sqrt{H^4S^2AT})$ while requiring a switching cost of $O(HSA \log\log T)$.
As a byproduct, our new algorithmic techniques allow us to derive a \emph{reward-free} exploration algorithm with an optimal switching cost of $O(HSA)$.
- Score: 31.04961854943877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of reinforcement learning (RL) with low (policy)
switching cost - a problem well-motivated by real-life RL applications in which
deployments of new policies are costly and the number of policy updates must be
low. In this paper, we propose a new algorithm based on stage-wise exploration
and adaptive policy elimination that achieves a regret of
$\widetilde{O}(\sqrt{H^4S^2AT})$ while requiring a switching cost of $O(HSA
\log\log T)$. This is an exponential improvement over the best-known switching
cost $O(H^2SA\log T)$ among existing methods with
$\widetilde{O}(\mathrm{poly}(H,S,A)\sqrt{T})$ regret. In the above, $S$ and $A$
denote the number of states and actions in an $H$-horizon episodic Markov
Decision Process model with unknown transitions, and $T$ is the number of
steps. We also prove an information-theoretical lower bound which says that a
switching cost of $\Omega(HSA)$ is required for any no-regret algorithm. As a
byproduct, our new algorithmic techniques allow us to derive a
\emph{reward-free} exploration algorithm with an optimal switching cost of
$O(HSA)$.
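The abstract attributes the $O(HSA\log\log T)$ switching cost to stage-wise exploration with adaptive policy elimination: the policy is only recomputed at the boundaries of a small number of stages. As a rough illustration, below is a minimal Python sketch of one standard doubly-exponential stage schedule with endpoints $T^{1-2^{-i}}$, which yields $O(\log\log T)$ stages; the function name `stage_endpoints`, this specific schedule, and the choice of the stage count `M` are illustrative assumptions, not the paper's exact construction.
```python
import math

def stage_endpoints(T: int) -> list[int]:
    """Stage endpoints of the form T^(1 - 2^{-i}), covering horizon T in O(log log T) stages.

    Note: this is a generic low-switching schedule used for illustration only;
    the paper's actual stage-wise exploration scheme may differ.
    """
    # After M = O(log log T) stages the endpoint is already within a constant
    # factor of T, so a single final stage can cover the remaining steps.
    M = max(1, math.ceil(math.log2(max(2.0, math.log2(T)))) + 1)
    endpoints = [int(T ** (1.0 - 2.0 ** (-i))) for i in range(1, M)]
    endpoints.append(T)  # the last stage runs up to the horizon T
    return endpoints

if __name__ == "__main__":
    for T in (10**4, 10**6, 10**8):
        stages = stage_endpoints(T)
        # A learner that recomputes its policy only at these boundaries performs
        # O(log log T) policy switches; the HSA factor in the paper's bound would
        # come from per-(h, s, a) policy components updated at each boundary.
        print(f"T={T:>9}: {len(stages)} stages, endpoints {stages}")
```
Under a schedule of this kind the number of policy updates is governed by the stage count alone, which is why only a $\log\log T$ factor, rather than the $\log T$ factor of doubling-based schemes, appears in the switching cost.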
Related papers
- Near-Optimal Reinforcement Learning with Self-Play under Adaptivity
Constraints [21.412085244757336]
We study the problem of multi-agent reinforcement learning (MARL) with adaptivity constraints.
For two-player zero-sum Markov Games, we design a (policy) elimination based algorithm that achieves a regret of $\widetilde{O}(\sqrt{H^3 S^2 ABK})$.
We prove a batch complexity lower bound $\Omega(\frac{H}{\log_A K}+\log\log K)$ for all algorithms with $\widetilde{O}(\sqrt{K})$ regret.
arXiv Detail & Related papers (2024-02-02T03:00:40Z) - Near-Optimal Adversarial Reinforcement Learning with Switching Costs [43.895798638743784]
We show how to develop a provably efficient algorithm for adversarial RL with switching costs.
Our lower bound indicates that, due to the fundamental challenge of switching costs in adversarial RL, the best previously achieved regret is no longer attainable.
We propose two novel switching-reduced algorithms with regrets that match our lower bound when the transition function is known.
arXiv Detail & Related papers (2023-02-08T23:41:29Z) - Horizon-Free and Variance-Dependent Reinforcement Learning for Latent
Markov Decision Processes [62.90204655228324]
We study regret minimization for reinforcement learning (RL) in Latent Markov Decision Processes (LMDPs) with context in hindsight.
We design a novel model-based algorithmic framework which can be instantiated with both a model-optimistic and a value-optimistic solver.
arXiv Detail & Related papers (2022-10-20T21:32:01Z) - Near-Optimal Regret Bounds for Multi-batch Reinforcement Learning [54.806166861456035]
We study the episodic reinforcement learning (RL) problem modeled by finite-horizon Markov Decision Processes (MDPs) with constraint on the number of batches.
We design a computationally efficient algorithm that achieves a near-optimal regret of $\tilde{O}(\sqrt{SAH^3K\ln(1/\delta)})$ in $K$ episodes, where $\tilde{O}(\cdot)$ hides logarithmic terms of $(S,A,H,K)$.
Our technical contributions are two-fold: 1) a near-optimal design scheme to explore
arXiv Detail & Related papers (2022-10-15T09:22:22Z) - Logarithmic Regret from Sublinear Hints [76.87432703516942]
We show that an algorithm can obtain $O(\log T)$ regret with just $O(\sqrt{T})$ hints under a natural query model.
We also show that $o(\sqrt{T})$ hints cannot guarantee better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2021-11-09T16:50:18Z) - Minimax Regret for Stochastic Shortest Path [63.45407095296692]
We study the Stochastic Shortest Path (SSP) problem, in which an agent has to reach a goal state with minimum total expected cost.
We show that the minimax regret for this setting is $\widetilde{O}(B_\star \sqrt{|S||A|K})$, where $B_\star$ is a bound on the expected cost of the optimal policy from any state.
Our algorithm runs in polynomial time per episode, and is based on a novel reduction to reinforcement learning in finite-horizon MDPs.
arXiv Detail & Related papers (2021-03-24T10:11:49Z) - Online Convex Optimization with Continuous Switching Constraint [78.25064451417082]
We introduce the problem of online convex optimization with continuous switching constraint.
We show that, for strongly convex functions, the regret bound can be improved to $O(\log T)$ for $S=\Omega(\log T)$, and to $O(\min\{T/\exp(S)+S,\,T\})$ for $S=O(\log T)$.
arXiv Detail & Related papers (2021-03-21T11:43:35Z) - A Provably Efficient Algorithm for Linear Markov Decision Process with
Low Switching Cost [53.968049198926444]
We present the first algorithm for linear MDP with a low switching cost.
Our algorithm achieves an $\widetilde{O}\left(\sqrt{d^3H^4K}\right)$ regret bound with a near-optimal $O\left(dH\log K\right)$ global switching cost.
arXiv Detail & Related papers (2021-01-02T18:41:27Z)