Linear Convergence for Natural Policy Gradient with Log-linear Policy
Parametrization
- URL: http://arxiv.org/abs/2209.15382v1
- Date: Fri, 30 Sep 2022 11:17:44 GMT
- Title: Linear Convergence for Natural Policy Gradient with Log-linear Policy
Parametrization
- Authors: Carlo Alfano and Patrick Rebeschini
- Abstract summary: We analyze the convergence rate of the unregularized natural policy gradient algorithm with log-linear policy parametrizations.
We show that the algorithm enjoys the same linear guarantees as in the deterministic case up to an error term.
- Score: 18.072051868187934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyze the convergence rate of the unregularized natural policy gradient
algorithm with log-linear policy parametrizations in infinite-horizon
discounted Markov decision processes. In the deterministic case, when the
Q-value is known and can be approximated by a linear combination of a known
feature function up to a bias error, we show that a geometrically-increasing
step size yields a linear convergence rate towards an optimal policy. We then
consider the sample-based case, when the best representation of the Q-value
function among linear combinations of a known feature function is known up to
an estimation error. In this setting, we show that the algorithm enjoys the
same linear guarantees as in the deterministic case up to an error term that
depends on the estimation error, the bias error, and the condition number of
the feature covariance matrix. Our results build upon the general framework of
policy mirror descent and extend previous findings for the softmax tabular
parametrization to the log-linear policy class.
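
To make the setting concrete, the following Python sketch shows one way to instantiate a Q-NPG-style update for a log-linear policy in the deterministic case described above: exact Q-values are fit by least squares onto a known feature function and the fitted weights are used as the update direction, with a geometrically increasing step size. The synthetic MDP, feature map, state weighting, and growth factor are illustrative assumptions, not the paper's exact construction.

```python
# A minimal illustrative sketch (not the authors' implementation) of a Q-NPG-style
# update for a log-linear policy pi_theta(a|s) ∝ exp(theta^T phi(s,a)) on a small
# synthetic MDP, in the deterministic case: Q-values are computed exactly, projected
# onto the feature space by least squares, and the step size grows geometrically.
import numpy as np

rng = np.random.default_rng(0)
S, A, d, gamma = 5, 3, 4, 0.9                 # states, actions, feature dim, discount
phi = rng.normal(size=(S, A, d))              # known feature function phi(s, a)
P = rng.dirichlet(np.ones(S), size=(S, A))    # transition kernel P(s' | s, a)
r = rng.uniform(size=(S, A))                  # reward r(s, a)

def policy(theta):
    """Log-linear policy: softmax of theta^T phi(s, .) in each state."""
    logits = phi @ theta                      # shape (S, A)
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

def q_pi(pi):
    """Exact policy evaluation: solve Q = r + gamma * P_pi Q as a linear system."""
    M = np.einsum("sap,pb->sapb", P, pi).reshape(S * A, S * A)
    return np.linalg.solve(np.eye(S * A) - gamma * M, r.reshape(-1)).reshape(S, A)

theta = np.zeros(d)
eta = 1.0                                     # initial step size
growth = 1.0 / gamma                          # geometric growth factor (illustrative choice)
for t in range(30):
    pi = policy(theta)
    Q = q_pi(pi)
    # Best linear representation of Q^pi: weighted least-squares fit onto phi.
    w_sa = (np.ones(S) / S)[:, None] * pi     # crude state-action weighting (uniform over states)
    X = phi.reshape(S * A, d) * np.sqrt(w_sa.reshape(-1, 1))
    y = Q.reshape(-1) * np.sqrt(w_sa.reshape(-1))
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    theta = theta + eta * w                   # NPG / policy-mirror-descent step on the logits
    eta *= growth                             # geometrically increasing step size
    print(t, float((w_sa * Q).sum()))         # weighted average Q-value as a progress check
```

In the sample-based case discussed in the abstract, the exact evaluation and least-squares steps would be replaced by estimates, introducing the estimation-error term in the guarantees.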
Related papers
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used algorithm for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z)
- Deterministic Policy Gradient Primal-Dual Methods for Continuous-Space Constrained MDPs [82.34567890576423]
We develop a deterministic policy gradient primal-dual method to find an optimal deterministic policy with non-asymptotic convergence.
We prove that the primal-dual iterates of D-PGPD converge at a sub-linear rate to an optimal regularized primal-dual pair.
To the best of our knowledge, this appears to be the first work that proposes a deterministic policy search method for continuous-space constrained MDPs.
arXiv Detail & Related papers (2024-08-19T14:11:04Z)
- Improved High-Probability Bounds for the Temporal Difference Learning Algorithm via Exponential Stability [17.771354881467435]
We show that a simple algorithm with a universal and instance-independent step size is sufficient to obtain near-optimal variance and bias terms.
Our proof technique is based on refined error bounds for linear approximation together with the novel stability result for the product of random matrices.
arXiv Detail & Related papers (2023-10-22T12:37:25Z)
- High-probability sample complexities for policy evaluation with linear function approximation [88.87036653258977]
We investigate the sample complexities required to guarantee a predefined estimation error of the best linear coefficients for two widely-used policy evaluation algorithms.
We establish the first sample complexity bound with high-probability convergence guarantee that attains the optimal dependence on the tolerance level.
arXiv Detail & Related papers (2023-05-30T12:58:39Z)
- Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies [115.86431674214282]
We consider infinite-horizon discounted Markov decision processes and study the convergence rates of the natural policy gradient (NPG) and the Q-NPG methods with the log-linear policy class.
We show that both methods attain linear convergence rates and $\mathcal{O}(1/\epsilon^2)$ sample complexities using a simple, non-adaptive geometrically increasing step size.
arXiv Detail & Related papers (2022-10-04T06:17:52Z)
- Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs [21.347689976296834]
We employ the natural policy gradient method to solve the discounted optimal control problem for constrained MDPs.
We also provide convergence and finite-sample guarantees for two sample-based NPG-PD algorithms.
arXiv Detail & Related papers (2022-06-06T04:28:04Z)
- Quasi-Newton Iteration in Deterministic Policy Gradient [0.0]
We show that the approximate Hessian converges to the exact Hessian at the optimal policy.
We analytically verify the formulation in a simple linear case and compare the convergence of the proposed method with the natural policy gradient.
arXiv Detail & Related papers (2022-03-25T18:38:57Z)
- On the Convergence Rates of Policy Gradient Methods [9.74841674275568]
We consider geometrically discounted Markov decision problems with finite state spaces.
We show that, with direct gradient parametrization, the sample complexity of policy gradient methods can be analyzed in this general setting.
arXiv Detail & Related papers (2022-01-19T07:03:37Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.