Stochastic first-order methods for average-reward Markov decision
processes
- URL: http://arxiv.org/abs/2205.05800v1
- Date: Wed, 11 May 2022 23:02:46 GMT
- Title: Stochastic first-order methods for average-reward Markov decision
processes
- Authors: Tianjiao Li, Feiyang Wu and Guanghui Lan
- Abstract summary: We study the problem of average-reward Markov decision processes (AMDPs)
We develop novel first-order methods with strong theoretical guarantees for both policy evaluation and optimization.
- Score: 10.483316336206903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of average-reward Markov decision processes (AMDPs) and
develop novel first-order methods with strong theoretical guarantees for both
policy evaluation and optimization. Existing on-policy evaluation methods
suffer from sub-optimal convergence rates as well as failure in handling
insufficiently random policies, e.g., deterministic policies, for lack of
exploration. To remedy these issues, we develop a novel variance-reduced
temporal difference (VRTD) method with linear function approximation for
randomized policies along with optimal convergence guarantees, and an
exploratory variance-reduced temporal difference (EVRTD) method for
insufficiently random policies with comparable convergence guarantees. We
further establish linear convergence rate on the bias of policy evaluation,
which is essential for improving the overall sample complexity of policy
optimization. On the other hand, compared with intensive research interest in
finite sample analysis of policy gradient methods for discounted MDPs, existing
studies on policy gradient methods for AMDPs mostly focus on regret bounds
under restrictive assumptions on the underlying Markov processes (see, e.g.,
Abbasi-Yadkori et al., 2019), and they often lack guarantees on the overall
sample complexities. Towards this end, we develop an average-reward variant of
the stochastic policy mirror descent (SPMD) (Lan, 2022). We establish the first
$\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity for solving AMDPs
with policy gradient method under both the generative model (with unichain
assumption) and Markovian noise model (with ergodic assumption). This bound can
be further improved to $\widetilde{\mathcal{O}}(\epsilon^{-1})$ for solving
regularized AMDPs. Our theoretical advantages are corroborated by numerical
experiments.
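The two ingredients described above, on-policy evaluation via temporal difference learning with linear function approximation and policy optimization via a mirror-descent update, can be illustrated with a minimal sketch. The toy MDP, step sizes, and function names below are assumptions for illustration only: the paper's VRTD/EVRTD methods add variance reduction and exploratory perturbations on top of the basic TD recursion, and its average-reward SPMD method uses stochastic advantage estimates inside the mirror-descent step.
```python
# Illustrative sketch only (not the paper's exact VRTD/SPMD algorithms):
# (i) differential TD(0) with linear function approximation for average-reward
#     policy evaluation, and (ii) a KL (entropy mirror map) policy update.
# The toy MDP, features, and step sizes are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# --- toy average-reward MDP (assumed) ---
n_states, n_actions, d = 5, 3, 4
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
r = rng.uniform(size=(n_states, n_actions))                       # r[s, a]
Phi = rng.normal(size=(n_states, d))                               # state features

def step(s, a):
    """Sample one transition from the toy MDP."""
    s_next = rng.choice(n_states, p=P[s, a])
    return r[s, a], s_next

def differential_td(policy, n_iters=20000, alpha=0.01, beta=0.01):
    """Average-reward TD(0) with linear features: learn theta so that
    Phi @ theta approximates the differential value function, and eta
    tracks the long-run average reward of `policy`."""
    theta, eta = np.zeros(d), 0.0
    s = rng.integers(n_states)
    for _ in range(n_iters):
        a = rng.choice(n_actions, p=policy[s])
        reward, s_next = step(s, a)
        # TD error uses (reward - eta) in place of a discounted target
        delta = reward - eta + Phi[s_next] @ theta - Phi[s] @ theta
        theta += alpha * delta * Phi[s]
        eta += beta * (reward - eta)
        s = s_next
    return theta, eta

def mirror_descent_policy_update(policy, Q, stepsize=1.0):
    """One KL-mirror-descent policy step:
    pi_new(a|s) proportional to pi(a|s) * exp(stepsize * Q(s, a))."""
    logits = np.log(policy + 1e-12) + stepsize * Q
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)
```
A full loop would estimate state-action values from the learned differential values and average reward, then plug them into the mirror-descent step; this mimics the evaluation-then-improvement scheme the paper analyzes, but without its variance-reduction and bias-control machinery.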
Related papers
- Model-Based Epistemic Variance of Values for Risk-Aware Policy
Optimization [63.32053223422317]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
In particular, we focus on characterizing the variance over values induced by a distribution over MDPs.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - A safe exploration approach to constrained Markov decision processes [7.036452261968767]
We consider discounted infinite horizon constrained Markov decision processes (CMDPs).
The goal is to find an optimal policy that maximizes the expected cumulative reward subject to expected cumulative constraints.
Motivated by the application of CMDPs in online learning of safety-critical systems, we focus on developing a model-free and simulator-free algorithm.
arXiv Detail & Related papers (2023-12-01T13:16:39Z) - Last-Iterate Convergent Policy Gradient Primal-Dual Methods for
Constrained MDPs [107.28031292946774]
We study the problem of computing an optimal policy of an infinite-horizon discounted constrained Markov decision process (constrained MDP).
We develop two single-time-scale policy-based primal-dual algorithms with non-asymptotic convergence of their policy iterates to an optimal constrained policy.
To the best of our knowledge, this work appears to be the first non-asymptotic policy last-iterate convergence result for single-time-scale algorithms in constrained MDPs.
arXiv Detail & Related papers (2023-06-20T17:27:31Z) - High-probability sample complexities for policy evaluation with linear function approximation [88.87036653258977]
We investigate the sample complexities required to guarantee a predefined estimation error of the best linear coefficients for two widely-used policy evaluation algorithms.
We establish the first sample complexity bound with high-probability convergence guarantee that attains the optimal dependence on the tolerance level.
arXiv Detail & Related papers (2023-05-30T12:58:39Z) - First-order Policy Optimization for Robust Markov Decision Process [40.2022466644885]
We consider the problem of solving a robust Markov decision process (MDP), which involves a set of discounted, finite state, finite action space MDPs with uncertain transition kernels.
For $(\mathbf{s},\mathbf{a})$-rectangular uncertainty sets, we establish several structural observations on the robust objective.
arXiv Detail & Related papers (2022-09-21T18:10:28Z) - Sample Complexity of Policy-Based Methods under Off-Policy Sampling and
Linear Function Approximation [8.465228064780748]
We study policy-based methods in which off-policy sampling and linear function approximation are employed for policy evaluation.
Various policy update rules, including natural policy gradient (NPG), are considered for policy update.
We establish for the first time an overall $\mathcal{O}(\epsilon^{-2})$ sample complexity for finding an optimal policy.
arXiv Detail & Related papers (2022-08-05T15:59:05Z) - Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study optimization of areas under precision-recall curves (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with a better iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z) - On the Convergence and Sample Efficiency of Variance-Reduced Policy
Gradient Method [38.34416337932712]
Policy gradient gives rise to a rich class of reinforcement learning (RL) methods, for example REINFORCE.
Yet the best known sample complexity result for such methods to find an $\epsilon$-optimal policy is $\mathcal{O}(\epsilon^{-3})$, which is suboptimal.
We study the fundamental convergence properties and sample efficiency of first-order policy optimization method.
arXiv Detail & Related papers (2021-02-17T07:06:19Z) - Variance-Reduced Off-Policy Memory-Efficient Policy Search [61.23789485979057]
Off-policy policy optimization is a challenging problem in reinforcement learning.
Off-policy algorithms are memory-efficient and capable of learning from off-policy samples.
arXiv Detail & Related papers (2020-09-14T16:22:46Z) - Fast Global Convergence of Natural Policy Gradient Methods with Entropy
Regularization [44.24881971917951]
Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms.
We develop convergence guarantees for entropy-regularized NPG methods under softmax parameterization.
Our results accommodate a wide range of learning rates, and shed light upon the role of entropy regularization in enabling fast convergence.
arXiv Detail & Related papers (2020-07-13T17:58:41Z) - Is Temporal Difference Learning Optimal? An Instance-Dependent Analysis [102.29671176698373]
We address the problem of policy evaluation in discounted Markov decision processes, and provide instance-dependent guarantees on the $\ell_\infty$-error under a generative model.
We establish both asymptotic and non-asymptotic versions of local minimax lower bounds for policy evaluation, thereby providing an instance-dependent baseline by which to compare algorithms.
arXiv Detail & Related papers (2020-03-16T17:15:28Z)