Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes
- URL: http://arxiv.org/abs/2302.11381v3
- Date: Tue, 21 Nov 2023 20:17:40 GMT
- Title: Optimal Convergence Rate for Exact Policy Mirror Descent in Discounted Markov Decision Processes
- Authors: Emmeran Johnson, Ciara Pike-Burke, Patrick Rebeschini
- Abstract summary: Policy Mirror Descent is a family of algorithms that covers a wide range of novel and fundamental methods in reinforcement learning.
Motivated by the instability of policy iteration with inexact policy evaluation, PMD algorithmically regularises the policy improvement step of PI.
We show that the dimension-free $\gamma$-rate of PI can be achieved by the general family of unregularised PMD algorithms under an adaptive step-size.
- Score: 18.35462792871242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Policy Mirror Descent (PMD) is a general family of algorithms that covers a
wide range of novel and fundamental methods in reinforcement learning.
Motivated by the instability of policy iteration (PI) with inexact policy
evaluation, PMD algorithmically regularises the policy improvement step of PI.
With exact policy evaluation, PI is known to converge linearly with a rate
given by the discount factor $\gamma$ of a Markov Decision Process. In this
work, we bridge the gap between PI and PMD with exact policy evaluation and
show that the dimension-free $\gamma$-rate of PI can be achieved by the general
family of unregularised PMD algorithms under an adaptive step-size. We show
that both the rate and step-size are unimprovable for PMD: we provide matching
lower bounds that demonstrate that the $\gamma$-rate is optimal for PMD methods
as well as PI, and that the adaptive step-size is necessary for PMD to achieve
it. Our work is the first to relate PMD to rate-optimality and step-size
necessity. Our study of the convergence of PMD avoids the use of the
performance difference lemma, which leads to a direct analysis of independent
interest. We also extend the analysis to the inexact setting and establish the
first dimension-optimal sample complexity for unregularised PMD under a
generative model, improving upon the best-known result.
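For orientation, the sketch below illustrates the exact-evaluation setting the abstract refers to: tabular PMD with the KL mirror map (a multiplicative-weights / NPG-style update) run with an increasing step-size. This is a minimal toy, not the paper's algorithm: the MDP data, the iteration count, and the geometric schedule `eta_k` are placeholder assumptions, and the paper's actual adaptive step-size and analysis are not reproduced here.

```python
import numpy as np

def q_values(P, r, pi, gamma):
    """Exact policy evaluation: return Q^pi.
    P: (S, A, S) transition tensor, r: (S, A) rewards, pi: (S, A) policy."""
    P_pi = np.einsum("sa,sat->st", pi, P)        # state-to-state kernel under pi
    r_pi = np.sum(pi * r, axis=1)                # expected one-step reward under pi
    V = np.linalg.solve(np.eye(P.shape[0]) - gamma * P_pi, r_pi)
    return r + gamma * P @ V                     # Q(s,a) = r(s,a) + gamma * E[V(s')]

def pmd_step(pi, Q, eta):
    """One exact PMD update under the KL mirror map:
    pi_new(a|s) proportional to pi(a|s) * exp(eta * Q(s,a))."""
    logits = np.log(np.clip(pi, 1e-300, None)) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

# Toy MDP (placeholder data, not from the paper).
rng = np.random.default_rng(0)
S, A, gamma = 3, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))       # P[s, a, :] is a distribution over next states
r = rng.uniform(size=(S, A))
pi = np.full((S, A), 1.0 / A)                    # uniform initial policy

for k in range(30):
    Q = q_values(P, r, pi, gamma)
    eta_k = 2.0 ** k                             # crude geometric schedule, a stand-in for
                                                 # the paper's adaptive step-size
    pi = pmd_step(pi, Q, eta_k)

print("near-greedy policy:\n", pi.round(3))
```

Under the KL mirror map, a larger step-size pushes each PMD update towards the greedy improvement step of PI, which is the intuition for why a sufficiently aggressive adaptive step-size can recover PI's dimension-free $\gamma$-rate; see the paper for the precise schedule and the matching lower bounds.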
Related papers
- Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting [56.92178753201331]
We propose the Observation-Aware Spectral (OAS) estimation technique, which enables the POMDP parameters to be learned from samples collected using a belief-based policy.
We show the consistency of the OAS procedure, and we prove a regret guarantee of order $\mathcal{O}(\sqrt{T \log(T)})$ for the proposed OAS-UCRL algorithm.
arXiv Detail & Related papers (2024-10-02T08:46:34Z)
- Deterministic Policy Gradient Primal-Dual Methods for Continuous-Space Constrained MDPs [82.34567890576423]
We develop a deterministic policy gradient primal-dual method to find an optimal deterministic policy with non-asymptotic convergence.
We prove that the primal-dual iterates of D-PGPD converge at a sub-linear rate to an optimal regularized primal-dual pair.
To the best of our knowledge, this appears to be the first work that proposes a deterministic policy search method for continuous-space constrained MDPs.
arXiv Detail & Related papers (2024-08-19T14:11:04Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Policy Mirror Descent with Lookahead [0.46040036610482665]
Policy Mirror Descent (PMD) can be seen as a soft Policy Iteration (PI) algorithm implementing regularized 1-step greedy policy improvement.
We propose a new class of PMD algorithms called $h$-PMD which incorporates multi-step greedy policy improvement.
We show that $h$-PMD enjoys a faster dimension-free $\gamma^h$-linear convergence rate, contingent on the computation of multi-step greedy policies.
arXiv Detail & Related papers (2024-03-21T06:10:51Z)
- On the Global Convergence of Policy Gradient in Average Reward Markov Decision Processes [50.68789924454235]
We present the first finite-time global convergence analysis of policy gradient in the context of average-reward Markov decision processes (MDPs).
Our analysis shows that the policy gradient iterates converge to the optimal policy at a sublinear rate of $O\left(\frac{1}{T}\right)$, which translates to $O\left(\log(T)\right)$ regret, where $T$ represents the number of iterations.
arXiv Detail & Related papers (2024-03-11T15:25:03Z)
- Last-Iterate Convergent Policy Gradient Primal-Dual Methods for Constrained MDPs [107.28031292946774]
We study the problem of computing an optimal policy of an infinite-horizon discounted constrained Markov decision process (constrained MDP).
We develop two single-time-scale policy-based primal-dual algorithms with non-asymptotic convergence of their policy iterates to an optimal constrained policy.
To the best of our knowledge, this work appears to be the first non-asymptotic policy last-iterate convergence result for single-time-scale algorithms in constrained MDPs.
arXiv Detail & Related papers (2023-06-20T17:27:31Z)
- A Theoretical Analysis of Optimistic Proximal Policy Optimization in Linear Markov Decision Processes [13.466249082564213]
We propose an optimistic variant of PPO for episodic adversarial linear MDPs with full-information feedback.
Compared with existing policy-based algorithms, we achieve the state-of-the-art regret bound in both linear MDPs and adversarial linear MDPs with full information.
arXiv Detail & Related papers (2023-05-15T17:55:24Z)
- First-order Policy Optimization for Robust Markov Decision Process [40.2022466644885]
We consider the problem of solving a robust Markov decision process (MDP).
The robust MDP involves a set of discounted, finite-state, finite-action MDPs with uncertain transition kernels.
For $(\mathbf{s},\mathbf{a})$-rectangular uncertainty sets, we establish several structural observations on the robust objective.
arXiv Detail & Related papers (2022-09-21T18:10:28Z)
- Efficient Policy Iteration for Robust Markov Decision Processes via Regularization [49.05403412954533]
Robust Markov decision processes (MDPs) provide a framework to model decision problems where the system dynamics are changing or only partially known.
Recent work established the equivalence between $s$-rectangular $L_p$ robust MDPs and regularized MDPs, and derived a regularized policy iteration scheme that enjoys the same level of efficiency as standard MDPs.
In this work, we focus on the policy improvement step and derive concrete forms for the greedy policy and the optimal robust Bellman operators.
arXiv Detail & Related papers (2022-05-28T04:05:20Z)
- Stochastic first-order methods for average-reward Markov decision processes [10.023632561462712]
We study average-reward Markov decision processes (AMDPs) and develop novel first-order methods with strong theoretical guarantees for both policy optimization and policy evaluation.
By combining the policy evaluation and policy optimization parts, we establish sample complexity results for solving AMDPs under both generative and Markovian noise models.
arXiv Detail & Related papers (2022-05-11T23:02:46Z)