First-Order Regret in Reinforcement Learning with Linear Function
Approximation: A Robust Estimation Approach
- URL: http://arxiv.org/abs/2112.03432v1
- Date: Tue, 7 Dec 2021 00:29:57 GMT
- Title: First-Order Regret in Reinforcement Learning with Linear Function
Approximation: A Robust Estimation Approach
- Authors: Andrew Wagenmaker, Yifang Chen, Max Simchowitz, Simon S. Du, Kevin
Jamieson
- Abstract summary: We show that it is possible to obtain regret scaling as $\mathcal{O}(\sqrt{V_1^\star K})$ in reinforcement learning with large state spaces.
We demonstrate that existing techniques based on least squares estimation are insufficient to obtain this result.
- Score: 57.570201404222935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Obtaining first-order regret bounds -- regret bounds scaling not as the
worst-case but with some measure of the performance of the optimal policy on a
given instance -- is a core question in sequential decision-making. While such
bounds exist in many settings, they have proven elusive in reinforcement
learning with large state spaces. In this work we address this gap, and show
that it is possible to obtain regret scaling as $\mathcal{O}(\sqrt{V_1^\star
K})$ in reinforcement learning with large state spaces, namely the linear MDP
setting. Here $V_1^\star$ is the value of the optimal policy and $K$ is the
number of episodes. We demonstrate that existing techniques based on least
squares estimation are insufficient to obtain this result, and instead develop
a novel robust self-normalized concentration bound based on the robust Catoni
mean estimator, which may be of independent interest.
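
To illustrate the building block the abstract refers to, here is a minimal sketch of Catoni's robust mean estimator in Python. This is only the classical scalar estimator (the root of the summed Catoni influence function), not the paper's self-normalized concentration bound or its use inside least-squares value iteration; the variance bound `sigma2`, the confidence level `delta`, and the particular choice of the scale parameter `alpha` are assumptions made for the example.

```python
import numpy as np

def catoni_psi(x):
    # Catoni's influence function: psi(x) = sign(x) * log(1 + |x| + x^2 / 2)
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x**2)

def catoni_mean(samples, sigma2, delta=0.05, tol=1e-10):
    """Robust mean estimate via Catoni's M-estimator (illustrative sketch).

    samples : 1-D array of observations
    sigma2  : assumed upper bound on the variance of each observation
    delta   : failure probability; controls the scale parameter alpha
    """
    x = np.asarray(samples, dtype=float)
    n = x.size
    # Simple (assumed) scale choice ~ sqrt(2 log(2/delta) / (n * sigma^2)).
    alpha = np.sqrt(2.0 * np.log(2.0 / delta) / (n * sigma2))

    # The estimate is the root in theta of sum_i psi(alpha * (x_i - theta)) = 0.
    # The score is decreasing in theta, so bisection on [min(x), max(x)] suffices.
    def score(theta):
        return np.sum(catoni_psi(alpha * (x - theta)))

    lo, hi = x.min(), x.max()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Usage: heavy-tailed samples whose empirical mean is easily dragged by outliers.
rng = np.random.default_rng(0)
rewards = rng.standard_t(df=2.5, size=500) * 0.1 + 0.3
print("empirical mean :", rewards.mean())
print("Catoni estimate:", catoni_mean(rewards, sigma2=0.05))
```

Compared with the empirical mean, the Catoni estimate is far less sensitive to the heavy tails of the samples, which is the property the paper exploits in place of standard least-squares estimation.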