Softmax Policy Gradient Methods Can Take Exponential Time to Converge
- URL: http://arxiv.org/abs/2102.11270v1
- Date: Mon, 22 Feb 2021 18:56:26 GMT
- Title: Softmax Policy Gradient Methods Can Take Exponential Time to Converge
- Authors: Gen Li and Yuting Wei and Yuejie Chi and Yuantao Gu and Yuxin Chen
- Abstract summary: The softmax policy gradient (PG) method is arguably one of the de facto implementations of policy optimization in modern reinforcement learning.
We demonstrate that softmax PG methods can take exponential time -- in terms of $|\mathcal{S}|$ and $\frac{1}{1-\gamma}$ -- to converge.
- Score: 60.98700344526674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The softmax policy gradient (PG) method, which performs gradient ascent under
softmax policy parameterization, is arguably one of the de facto
implementations of policy optimization in modern reinforcement learning. For
$\gamma$-discounted infinite-horizon tabular Markov decision processes (MDPs),
remarkable progress has recently been achieved towards establishing global
convergence of softmax PG methods in finding a near-optimal policy. However,
prior results fall short of delineating clear dependencies of convergence rates
on salient parameters such as the cardinality of the state space $\mathcal{S}$
and the effective horizon $\frac{1}{1-\gamma}$, both of which could be
excessively large. In this paper, we deliver a pessimistic message regarding
the iteration complexity of softmax PG methods, despite assuming access to
exact gradient computation. Specifically, we demonstrate that softmax PG
methods can take exponential time -- in terms of $|\mathcal{S}|$ and
$\frac{1}{1-\gamma}$ -- to converge, even in the presence of a benign policy
initialization and an initial state distribution amenable to exploration. This
is accomplished by characterizing the algorithmic dynamics over a
carefully-constructed MDP containing only three actions. Our exponential lower
bound hints at the necessity of carefully adjusting update rules or enforcing
proper regularization in accelerating PG methods.
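For concreteness, the sketch below illustrates the vanilla method the abstract analyzes: exact softmax policy gradient ascent on a toy tabular MDP, using the standard closed-form gradient $\frac{\partial V^{\pi_\theta}(\mu)}{\partial \theta_{s,a}} = \frac{1}{1-\gamma}\, d_\mu^{\pi_\theta}(s)\, \pi_\theta(a|s)\, A^{\pi_\theta}(s,a)$. The random MDP, step size, and iteration budget are illustrative assumptions and are unrelated to the paper's three-action hard instance.

```python
# Minimal sketch (assumptions: toy random MDP, step size, iteration count):
# exact softmax policy gradient ascent on a gamma-discounted tabular MDP.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, eta = 4, 3, 0.9, 0.1            # toy sizes, discount, step size
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
r = rng.random((S, A))                       # rewards in [0, 1]
mu = np.full(S, 1.0 / S)                     # uniform initial-state distribution
theta = np.zeros((S, A))                     # softmax logits (uniform initial policy)

def softmax_policy(theta):
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

for t in range(2000):
    pi = softmax_policy(theta)
    P_pi = np.einsum("sa,sap->sp", pi, P)                  # transition matrix under pi
    r_pi = (pi * r).sum(axis=1)                            # expected one-step reward
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)    # exact V^pi
    Q = r + gamma * P @ V                                  # exact Q^pi
    adv = Q - V[:, None]                                   # advantage A^pi
    # discounted state-visitation distribution d_mu^pi (normalized to sum to 1)
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu)
    grad = d[:, None] * pi * adv / (1 - gamma)             # exact policy gradient
    theta += eta * grad                                    # vanilla gradient ascent

print("V^pi(mu) of the last evaluated policy:", mu @ V)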
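```

On a benign random instance like this one the loop converges quickly; the paper's lower bound says that, on a carefully constructed MDP, this same exact-gradient iteration can require a number of steps exponential in $|\mathcal{S}|$ and $\frac{1}{1-\gamma}$.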
Related papers
- Structure Matters: Dynamic Policy Gradient [1.747623282473278]
We introduce a framework called dynamic policy gradient (DynPG).
DynPG directly integrates dynamic programming with (any) policy gradient method.
Our findings contrast recent lower bound examples for vanilla policy gradient.
arXiv Detail & Related papers (2024-11-07T17:51:55Z)
- Near-Optimal Policy Identification in Robust Constrained Markov Decision Processes via Epigraph Form [26.01796404477275]
This paper presents the first algorithm capable of identifying a near-optimal policy in a robust constrained MDP (RCMDP).
An optimal policy minimizes cumulative cost while satisfying constraints in the worst-case scenario across a set of environments.
arXiv Detail & Related papers (2024-08-29T06:37:16Z)
- Deterministic Policy Gradient Primal-Dual Methods for Continuous-Space Constrained MDPs [82.34567890576423]
We develop a deterministic policy gradient primal-dual method to find an optimal deterministic policy with non-asymptotic convergence.
We prove that the primal-dual iterates of D-PGPD converge at a sub-linear rate to an optimal regularized primal-dual pair.
To the best of our knowledge, this appears to be the first work that proposes a deterministic policy search method for continuous-space constrained MDPs.
arXiv Detail & Related papers (2024-08-19T14:11:04Z)
- Confident Natural Policy Gradient for Local Planning in $q_π$-realizable Constrained MDPs [44.69257217086967]
The constrained Markov decision process (CMDP) framework emerges as an important reinforcement learning approach for imposing safety or other critical objectives.
In this paper, we address the learning problem given linear function approximation with $q_\pi$-realizability.
arXiv Detail & Related papers (2024-06-26T17:57:13Z)
- On the Global Convergence of Policy Gradient in Average Reward Markov Decision Processes [50.68789924454235]
We present the first finite-time global convergence analysis of policy gradient in the context of average-reward Markov decision processes (MDPs).
Our analysis shows that the policy gradient iterates converge to the optimal policy at a sublinear rate of $O\left(\frac{1}{T}\right)$, which translates to $O\left(\log(T)\right)$ regret, where $T$ represents the number of iterations (see the short calculation below).
arXiv Detail & Related papers (2024-03-11T15:25:03Z)
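As a quick sanity check on the rate-to-regret translation claimed above, assuming the per-iterate optimality gap is bounded as $J^* - J(\pi_t) \le C/t$ for some constant $C$:

```latex
\sum_{t=1}^{T} \bigl( J^* - J(\pi_t) \bigr)
  \;\le\; \sum_{t=1}^{T} \frac{C}{t}
  \;\le\; C \left( 1 + \log T \right)
  \;=\; O\!\left( \log(T) \right).
```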
- Last-Iterate Convergent Policy Gradient Primal-Dual Methods for Constrained MDPs [107.28031292946774]
We study the problem of computing an optimal policy of an infinite-horizon discounted constrained Markov decision process (constrained MDP).
We develop two single-time-scale policy-based primal-dual algorithms with non-asymptotic convergence of their policy iterates to an optimal constrained policy.
To the best of our knowledge, this work appears to be the first non-asymptotic policy last-iterate convergence result for single-time-scale algorithms in constrained MDPs.
arXiv Detail & Related papers (2023-06-20T17:27:31Z)
- Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies [115.86431674214282]
We consider infinite-horizon discounted Markov decision processes and study the convergence rates of the natural policy gradient (NPG) and the Q-NPG methods with the log-linear policy class.
We show that both methods attain linear convergence rates and $\mathcal{O}(1/\epsilon^2)$ sample complexities using a simple, non-adaptive geometrically increasing step size.
arXiv Detail & Related papers (2022-10-04T06:17:52Z)
- Homotopic Policy Mirror Descent: Policy Convergence, Implicit Regularization, and Improved Sample Complexity [40.2022466644885]
We propose the homotopic policy mirror descent (HPMD) method for solving discounted, infinite-horizon MDPs with finite state and action spaces.
We report three properties that appear to be new in the policy gradient literature.
arXiv Detail & Related papers (2022-01-24T04:54:58Z)
- Fast Global Convergence of Natural Policy Gradient Methods with Entropy Regularization [44.24881971917951]
Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms.
We develop convergence guarantees for entropy-regularized NPG methods under softmax parameterization.
Our results accommodate a wide range of learning rates, and shed light upon the role of entropy regularization in enabling fast convergence (a toy sketch of such an update follows this entry).
arXiv Detail & Related papers (2020-07-13T17:58:41Z)
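To make the entropy-regularized setting concrete, here is a hypothetical sketch of a tabular entropy-regularized NPG-style step under softmax parameterization, using the common multiplicative update that blends the current policy with $\exp(Q_\tau/\tau)$; the toy MDP, the constants, and the exact form of the update are assumptions rather than the paper's precise statement.

```python
# Hypothetical sketch (assumptions: toy random MDP, constants, and the
# multiplicative form of the entropy-regularized NPG update).
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma, tau, eta = 4, 3, 0.9, 0.1, 0.5  # sizes, discount, entropy weight, step size
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] = next-state distribution
r = rng.random((S, A))
pi = np.full((S, A), 1.0 / A)                # uniform initial policy

for t in range(200):
    # Exact soft (entropy-regularized) value functions of the current policy.
    P_pi = np.einsum("sa,sap->sp", pi, P)
    r_pi = (pi * r).sum(axis=1) - tau * (pi * np.log(pi)).sum(axis=1)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)    # soft V^pi
    Q = r + gamma * P @ V                                   # soft Q^pi
    # NPG-style step: new policy proportional to
    # pi^(1 - eta*tau/(1-gamma)) * exp(eta*Q/(1-gamma)).
    alpha = eta * tau / (1 - gamma)
    logits = (1 - alpha) * np.log(pi) + (eta / (1 - gamma)) * Q
    logits -= logits.max(axis=1, keepdims=True)             # numerical stabilization
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)
```

With $\eta = (1-\gamma)/\tau$ this update reduces to soft policy iteration, i.e., $\pi \propto \exp(Q_\tau/\tau)$.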
- Provably Efficient Safe Exploration via Primal-Dual Policy Optimization [105.7510838453122]
We study the Safe Reinforcement Learning (SRL) problem using the Constrained Markov Decision Process (CMDP) formulation.
We present a provably efficient online policy optimization algorithm for CMDPs with safe exploration in the function approximation setting.
arXiv Detail & Related papers (2020-03-01T17:47:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.