An Elementary Proof that Q-learning Converges Almost Surely
- URL: http://arxiv.org/abs/2108.02827v1
- Date: Thu, 5 Aug 2021 19:32:26 GMT
- Title: An Elementary Proof that Q-learning Converges Almost Surely
- Authors: Matthew T. Regehr, Alex Ayoub
- Abstract summary: Watkins' and Dayan's Q-learning is a model-free reinforcement learning algorithm.
This paper reproduces a precise and (nearly) self-contained proof that Q-learning converges.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Watkins' and Dayan's Q-learning is a model-free reinforcement learning
algorithm that iteratively refines an estimate for the optimal action-value
function of an MDP by stochastically "visiting" many state-action pairs [Watkins
and Dayan, 1992]. Variants of the algorithm lie at the heart of numerous recent
state-of-the-art achievements in reinforcement learning, including the
superhuman Atari-playing deep Q-network [Mnih et al., 2015]. The goal of this
paper is to reproduce a precise and (nearly) self-contained proof that
Q-learning converges. Much of the available literature leverages powerful
theory to obtain highly generalizable results in this vein. However, this
approach requires the reader to be familiar with and make many deep connections
to different research areas. A student seeking to deepen their understanding of
Q-learning risks becoming caught in a vicious cycle of "RL-learning Hell". For
this reason, we give a complete proof from start to finish using only one
external result from the field of stochastic approximation, despite the fact
that this minimal dependence on other results comes at the expense of some
"shininess".
Related papers
- Lifting the Veil: Unlocking the Power of Depth in Q-learning [31.700583180829106]
deep Q-learning has been widely used in operations research and management science.
This paper theoretically verifies the power of depth in deep Q-learning.
arXiv Detail & Related papers (2023-10-27T06:15:33Z)
- Understanding, Predicting and Better Resolving Q-Value Divergence in Offline-RL [86.0987896274354]
We first identify a fundamental pattern, self-excitation, as the primary cause of Q-value estimation divergence in offline RL.
We then propose a novel Self-Excite Eigenvalue Measure (SEEM) metric to measure the evolving property of Q-network at training.
For the first time, our theory can reliably decide whether the training will diverge at an early stage.
arXiv Detail & Related papers (2023-10-06T17:57:44Z)
- An Analysis of Quantile Temporal-Difference Learning [53.36758478669685]
quantile temporal-difference learning (QTD) has proven to be a key component in several successful large-scale applications of reinforcement learning.
Unlike classical TD learning, QTD updates do not approximate contraction mappings, are highly non-linear, and may have multiple fixed points.
This paper proves convergence of QTD to the fixed points of a related family of dynamic programming procedures with probability 1.
arXiv Detail & Related papers (2023-01-11T13:41:56Z)
- Sufficient Exploration for Convex Q-learning [10.75319149461189]
This paper builds on the linear programming (LP) formulation of optimal control of Manne.
A primal version is called logistic Q-learning, and a dual variant is convex Q-learning.
It is shown that convex Q-learning is successful in cases where standard Q-learning diverges.
arXiv Detail & Related papers (2022-10-17T20:22:12Z)
- Convergence Results For Q-Learning With Experience Replay [51.11953997546418]
We provide a convergence rate guarantee, and discuss how it compares to the convergence of Q-learning depending on important parameters such as the frequency and number of iterations of replay.
We also provide theoretical evidence showing when we might expect this to strictly improve performance, by introducing and analyzing a simple class of MDPs.
arXiv Detail & Related papers (2021-12-08T10:22:49Z)
- Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs [50.75812033462294]
We bridge the gap between practical success of Q-learning and pessimistic theoretical results.
We present novel methods Q-Rex and Q-RexDaRe.
We show that Q-Rex efficiently finds the optimal policy for linear MDPs.
arXiv Detail & Related papers (2021-10-16T01:47:41Z)
- Tightening the Dependence on Horizon in the Sample Complexity of Q-Learning [59.71676469100807]
This work sharpens the sample complexity of synchronous Q-learning to an order of $\frac{|\mathcal{S}||\mathcal{A}|}{(1-\gamma)^4\varepsilon^2}$ for any $0 < \varepsilon < 1$.
Our finding unveils the effectiveness of vanilla Q-learning, which matches that of speedy Q-learning without requiring extra computation and storage.
arXiv Detail & Related papers (2021-02-12T14:22:05Z)
- Self-correcting Q-Learning [14.178899938667161]
We introduce a new way to address the bias in the form of a "self-correcting algorithm".
Applying this strategy to Q-learning results in Self-correcting Q-learning.
We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate.
arXiv Detail & Related papers (2020-12-02T11:36:24Z)
- Finite-Time Analysis for Double Q-learning [50.50058000948908]
We provide the first non-asymptotic, finite-time analysis for double Q-learning.
We show that both synchronous and asynchronous double Q-learning are guaranteed to converge to an $\epsilon$-accurate neighborhood of the global optimum.
arXiv Detail & Related papers (2020-09-29T18:48:21Z)
- Deep Q-Learning: Theoretical Insights from an Asymptotic Analysis [3.9871041399267613]
Deep Q-Learning is an important reinforcement learning algorithm, which involves training a deep neural network to approximate the well-known Q-function.
Although wildly successful under laboratory conditions, serious gaps between theory and practice as well as a lack of formal guarantees prevent its use in the real world.
We provide a theoretical analysis of a popular version of Deep Q-Learning under realistic verifiable assumptions.
arXiv Detail & Related papers (2020-08-25T07:59:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.