Sufficient Exploration for Convex Q-learning
- URL: http://arxiv.org/abs/2210.09409v1
- Date: Mon, 17 Oct 2022 20:22:12 GMT
- Title: Sufficient Exploration for Convex Q-learning
- Authors: Fan Lu, Prashant Mehta, Sean Meyn and Gergely Neu
- Abstract summary: This paper builds on Manne's linear programming (LP) formulation of optimal control.
A primal version is called logistic Q-learning, and a dual variant is convex Q-learning.
It is shown that convex Q-learning is successful in cases where standard Q-learning diverges.
- Score: 10.75319149461189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years there has been a collective research effort to find new
formulations of reinforcement learning that are simultaneously more efficient
and more amenable to analysis. This paper concerns one approach that builds on
Manne's linear programming (LP) formulation of optimal control. A primal
version is called logistic Q-learning, and a dual variant is convex Q-learning.
This paper focuses on the latter, while building bridges with the former. The
main contributions are as follows: (i) The dual of convex Q-learning is not precisely
Manne's LP or a version of logistic Q-learning, but it has a similar structure that
reveals the need for regularization to avoid overfitting. (ii) A sufficient
condition is obtained for a bounded solution to the Q-learning LP. (iii)
Simulation studies reveal numerical challenges when addressing sampled-data
systems based on a continuous time model. The challenge is addressed using
state-dependent sampling. The theory is illustrated with applications to
examples from OpenAI gym. It is shown that convex Q-learning is successful in
cases where standard Q-learning diverges, such as the LQR problem.
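For concreteness, here is a minimal sketch of Manne's LP for a small tabular MDP, solved with scipy.optimize.linprog. The three-state example and the solver choice are illustrative assumptions, not the paper's experimental setup; convex Q-learning itself works with sampled data and a parameterized Q-function rather than the exact constraints below.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative random MDP (an assumption for this sketch, not the paper's example).
gamma = 0.9
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, u, :] = P(.|x, u)
r = rng.uniform(size=(n_states, n_actions))                       # reward r(x, u)

# Manne's LP (reward maximization): minimize sum_x mu(x) V(x) subject to
#   V(x) >= r(x, u) + gamma * sum_x' P(x'|x, u) V(x')   for all (x, u);
# the minimizer is the optimal value function V*.
mu = np.ones(n_states) / n_states        # any strictly positive state weighting
A_ub, b_ub = [], []
for x in range(n_states):
    for u in range(n_actions):
        A_ub.append(gamma * P[x, u] - np.eye(n_states)[x])  # gamma*P V - V(x) <= -r(x, u)
        b_ub.append(-r[x, u])
res = linprog(c=mu, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states)
V_star = res.x
Q_star = r + gamma * P @ V_star          # Q*(x, u) = r(x, u) + gamma * E[V*(x')]
print("V* =", V_star)
print("greedy policy:", Q_star.argmax(axis=1))
```

In the sampled-data setting the exact constraints above are only available empirically, which is one reading of why the regularization discussed in contribution (i) matters.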
Related papers
- Two-Step Q-Learning [0.0]
The paper proposes a novel off-policy two-step Q-learning algorithm that does not require importance sampling.
Numerical experiments demonstrate the superior performance of both two-step Q-learning and its smooth variants.
arXiv Detail & Related papers (2024-07-02T15:39:00Z)
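The summary above does not spell out the update, so the following is a textbook two-step (n = 2) bootstrapped target with no importance-sampling correction, offered only as a point of reference; the paper's actual algorithm may differ.

```python
import numpy as np

def two_step_update(Q, s, a, r1, r2, s2, alpha=0.1, gamma=0.99):
    """Generic two-step Q-learning update (not necessarily the paper's variant).

    The target uses two observed rewards before bootstrapping:
        r_t + gamma * r_{t+1} + gamma^2 * max_a' Q(s_{t+2}, a').
    Q is a numpy array of shape (n_states, n_actions).
    """
    target = r1 + gamma * r2 + gamma ** 2 * Q[s2].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q
```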
- Regularized Q-learning through Robust Averaging [3.4354636842203026]
We propose a new Q-learning variant, called 2RA Q-learning, that addresses some weaknesses of existing Q-learning methods in a principled manner.
One such weakness is an underlying estimation bias which cannot be controlled and often results in poor performance.
We show that 2RA Q-learning converges to the optimal policy and analyze its theoretical mean-squared error.
arXiv Detail & Related papers (2024-05-03T15:57:26Z)
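The summary does not specify how 2RA Q-learning forms its robust average, so the snippet below shows only the generic device of averaging an ensemble of Q-estimates before taking the max, a standard way to control maximization bias; it is not the paper's 2RA construction.

```python
import numpy as np

def ensemble_max_target(Q_ensemble, s_next, r, gamma=0.99):
    # Q_ensemble: array of K independent Q-tables, shape (K, n_states, n_actions).
    # Averaging before the max damps the upward bias of max-of-noisy-estimates;
    # this is a generic bias-control device, not the 2RA estimator itself.
    Q_bar = np.mean(Q_ensemble, axis=0)
    return r + gamma * Q_bar[s_next].max()
```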
- Unifying (Quantum) Statistical and Parametrized (Quantum) Algorithms [65.268245109828]
We take inspiration from Kearns' SQ oracle and Valiant's weak evaluation oracle.
We introduce an extensive yet intuitive framework that yields unconditional lower bounds for learning from evaluation queries.
arXiv Detail & Related papers (2023-10-26T18:23:21Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
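As a rough illustration of end-to-end QAG with a fine-tuned sequence-to-sequence LM, the snippet below uses the Hugging Face transformers API; the checkpoint name and the prompt format are placeholder assumptions, since the summary does not name the paper's models.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical fine-tuned QAG checkpoint; substitute a real model name.
checkpoint = "your-org/t5-base-finetuned-qag"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

context = "Convex Q-learning builds on Manne's LP formulation of optimal control."
inputs = tokenizer("generate question and answer: " + context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```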
- Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs [50.75812033462294]
We bridge the gap between the practical success of Q-learning and pessimistic theoretical results.
We present the novel methods Q-Rex and Q-RexDaRe.
We show that Q-Rex efficiently finds the optimal policy for linear MDPs.
arXiv Detail & Related papers (2021-10-16T01:47:41Z)
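Reverse experience replay is concrete enough to sketch: transitions are replayed from the end of a trajectory back to its start, so bootstrapped values propagate backward in one pass. The function below illustrates only the replay order, not the full Q-Rex method.

```python
import numpy as np

def reverse_replay(Q, trajectory, alpha=0.5, gamma=0.99):
    # trajectory: list of (s, a, r, s_next) tuples in the order collected.
    # Updating in reverse lets values computed for later states feed the
    # targets of earlier transitions within the same pass.
    for s, a, r, s_next in reversed(trajectory):
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])
    return Q
```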
- Finite-Time Analysis of Asynchronous Q-Learning with Discrete-Time Switching System Models [6.85316573653194]
We prove that Q-learning with a constant step-size can be naturally formulated as a discrete-time switched linear system.
This offers novel and intuitive insights into Q-learning, based mainly on control-theoretic frameworks.
arXiv Detail & Related papers (2021-02-17T05:32:07Z)
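The switched-system view can be made concrete: with a constant step size, the tabular update below is affine in Q whenever the maximizing action at each next state is held fixed, so the iteration hops among finitely many affine maps as the argmax pattern changes. The code is a plain constant step-size Q-learning step, given only to make that reading visible.

```python
def q_step(Q, s, a, r, s_next, alpha=0.2, gamma=0.9):
    # Q is a numpy array of shape (n_states, n_actions). For a fixed greedy
    # action a* = argmax_a' Q[s_next, a'], this update has the form
    # Q <- Q + alpha * (b + A Q) with A linear in Q: one mode of a
    # discrete-time switched affine system, switching when the argmax changes.
    td = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td
    return Q
```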
- Analysis of Q-learning with Adaptation and Momentum Restart for Gradient Descent [47.3692506462581]
We first characterize the convergence rate of Q-AMSGrad, the Q-learning algorithm with the AMSGrad update.
To further improve performance, we incorporate a momentum restart scheme into Q-AMSGrad, resulting in the Q-AMSGradR algorithm.
Our experiments on a linear quadratic regulator problem show that the two proposed Q-learning algorithms outperform vanilla Q-learning with SGD updates.
arXiv Detail & Related papers (2020-07-15T01:11:43Z)
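To make the "Q-learning with the AMSGrad update" idea concrete, here is a standard AMSGrad step applied to a TD semi-gradient; the hyperparameters, and the restart rule of Q-AMSGradR, are assumptions here rather than values taken from the paper.

```python
import numpy as np

def amsgrad_step(theta, g, m, v, v_hat, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # g: TD semi-gradient at theta (e.g., -td_error * feature vector).
    m = b1 * m + (1 - b1) * g              # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * g * g          # second-moment estimate
    v_hat = np.maximum(v_hat, v)           # AMSGrad: non-decreasing second moment
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)
    return theta, m, v, v_hat              # Q-AMSGradR would also reset m at restarts
```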
- Harvesting and Refining Question-Answer Pairs for Unsupervised QA [95.9105154311491]
We introduce two approaches to improve unsupervised Question Answering (QA).
First, we harvest lexically and syntactically divergent questions from Wikipedia to automatically construct a corpus of question-answer pairs (named RefQA).
Second, we take advantage of the QA model to extract more appropriate answers, iteratively refining the data over RefQA.
arXiv Detail & Related papers (2020-05-06T15:56:06Z)
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
arXiv Detail & Related papers (2020-04-24T17:57:45Z)
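The template idea in the last entry is easy to caricature: given a retrieved sentence and an answer span inside it, a fixed template turns the sentence into a cloze-style question. The single template below is a toy assumption; the paper's actual templates are not given in the summary.

```python
def templated_question(retrieved_sentence: str, answer_span: str) -> str:
    # Replace the answer span with a wh-word to form a cloze-style question.
    return retrieved_sentence.rstrip(".").replace(answer_span, "what") + "?"

print(templated_question(
    "Convex Q-learning builds on Manne's LP formulation.",
    "Manne's LP formulation"))
# -> Convex Q-learning builds on what?
```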
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.