Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning
- URL: http://arxiv.org/abs/2402.10810v1
- Date: Fri, 16 Feb 2024 16:35:18 GMT
- Title: Double Duality: Variational Primal-Dual Policy Optimization for
Constrained Reinforcement Learning
- Authors: Zihao Li, Boyi Liu, Zhuoran Yang, Zhaoran Wang, Mengdi Wang
- Abstract summary: We study the Constrained Convex Markov Decision Process (MDP), where the goal is to minimize a convex functional of the visitation measure.
Designing algorithms for a constrained convex MDP faces several challenges, including handling the large state space.
- Score: 132.7040981721302
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We study the Constrained Convex Markov Decision Process (MDP), where the goal
is to minimize a convex functional of the visitation measure, subject to a
convex constraint. Designing algorithms for a constrained convex MDP faces
several challenges, including (1) handling the large state space, (2) managing
the exploration/exploitation tradeoff, and (3) solving the constrained
optimization where the objective and the constraint are both nonlinear
functions of the visitation measure. In this work, we present a model-based
algorithm, Variational Primal-Dual Policy Optimization (VPDPO), in which
Lagrangian and Fenchel duality are implemented to reformulate the original
constrained problem into an unconstrained primal-dual optimization. The primal
variables are updated by model-based value iteration following the principle of
Optimism in the Face of Uncertainty (OFU), while the dual variables are updated
by gradient ascent. Moreover, by embedding the visitation
measure into a finite-dimensional space, we can handle large state spaces by
incorporating function approximation. Two notable examples are (1) Kernelized
Nonlinear Regulators and (2) Low-rank MDPs. We prove that with an optimistic
planning oracle, our algorithm achieves sublinear regret and constraint
violation in both cases and can attain the globally optimal policy of the
original constrained problem.
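The following is a minimal, self-contained sketch of the primal-dual structure described in the abstract, restricted to a tabular MDP with a known model; the toy objective, constraint, and all numerical choices are hypothetical, and VPDPO's optimistic (OFU-based) planning, exploration, and function-approximation components (kernelized nonlinear regulators, low-rank MDPs) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: a small random tabular MDP with a *known* model.
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s']: transition kernel
rho0 = np.ones(S) / S                        # initial state distribution

# Convex objective f(mu) = 0.5 * ||mu - mu_target||^2 over the discounted
# state-action visitation measure mu, with Fenchel conjugate
# f*(z) = <z, mu_target> + 0.5 * ||z||^2 and gradient grad f*(z) = mu_target + z.
mu_target = rng.dirichlet(np.ones(S * A)).reshape(S, A)

# Linear constraint g(mu) = <c, mu> - b <= 0 (e.g., a budget on expected cost).
c = rng.uniform(0.0, 1.0, size=(S, A))
b = 0.45

def greedy_policy(r, iters=200):
    """Value iteration for reward r(s, a); returns a deterministic greedy policy."""
    q = np.zeros((S, A))
    for _ in range(iters):
        q = r + gamma * P @ q.max(axis=1)
    return q.argmax(axis=1)

def visitation_measure(pi):
    """Discounted state-action visitation measure of a deterministic policy pi."""
    P_pi = P[np.arange(S), pi, :]            # (S, S) transition matrix under pi
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho0)
    mu = np.zeros((S, A))
    mu[np.arange(S), pi] = d
    return mu

# Primal-dual loop: Lagrange multiplier lam >= 0, Fenchel dual variable z.
z, lam = np.zeros((S, A)), 0.0
eta, T = 0.5, 300
mu_avg = np.zeros((S, A))

for t in range(T):
    # Primal step: for fixed dual variables, the inner problem is a standard MDP
    # with reward -(z + lam * c); here it is solved exactly by value iteration
    # (VPDPO instead uses an optimistic, model-based planning oracle).
    pi = greedy_policy(-(z + lam * c))
    mu = visitation_measure(pi)
    mu_avg += (mu - mu_avg) / (t + 1)

    # Dual step: gradient ascent on z, projected gradient ascent on lam.
    z += eta * (mu - (mu_target + z))        # grad_z = mu - grad f*(z)
    lam = max(0.0, lam + eta * ((c * mu).sum() - b))

print("f(mu_avg) =", 0.5 * np.sum((mu_avg - mu_target) ** 2))
print("g(mu_avg) =", (c * mu_avg).sum() - b)
```

The point of the "double duality" is visible in the inner loop: after Lagrangian and Fenchel reformulation, the minimization over visitation measures reduces to a standard MDP with reward -(z + lam * c), so an MDP planner can serve as the primal oracle while the dual variables are updated by (projected) gradient ascent.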
Related papers
- Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models [57.52124921268249]
We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points.
To converge to first-order stationary points, our method computes a gradient step in each iteration, defined by minimizing an approximation of the objective subject to the constraints.
To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix.
arXiv Detail & Related papers (2024-09-24T04:39:47Z) - Deterministic Policy Gradient Primal-Dual Methods for Continuous-Space Constrained MDPs [82.34567890576423]
We develop a deterministic policy gradient primal-dual method to find an optimal deterministic policy with non-asymptotic convergence.
We prove that the primal-dual iterates of D-PGPD converge at a sub-linear rate to an optimal regularized primal-dual pair.
To the best of our knowledge, this appears to be the first work that proposes a deterministic policy search method for continuous-space constrained MDPs.
arXiv Detail & Related papers (2024-08-19T14:11:04Z) - Learning Constrained Optimization with Deep Augmented Lagrangian Methods [54.22290715244502]
A machine learning (ML) model is trained to emulate a constrained optimization solver.
This paper proposes an alternative approach, in which the ML model is trained to predict dual solution estimates directly.
It enables an end-to-end training scheme in which the dual objective serves as a loss function and the solution estimates are driven toward primal feasibility, emulating a Dual Ascent method (a minimal dual-ascent sketch appears after this list).
arXiv Detail & Related papers (2024-03-06T04:43:22Z) - Primal-dual regression approach for Markov decision processes with
general state and action space [0.30458514384586394]
We develop a regression based primal-dual martingale approach for solving finite time MDPs with general state and action space.
As a result, our method allows for the construction of tight upper and lower biased approximations of the value functions and provides tight approximations to the optimal policy.
arXiv Detail & Related papers (2022-10-01T11:48:22Z) - Convergence and sample complexity of natural policy gradient primal-dual methods for constrained MDPs [21.347689976296834]
We employ the natural policy gradient method to solve the discounted optimal control problem.
We also provide convergence and finite-sample guarantees for two sample-based NPG-PD algorithms.
arXiv Detail & Related papers (2022-06-06T04:28:04Z) - Algorithm for Constrained Markov Decision Process with Linear
Convergence [55.41644538483948]
An agent aims to maximize the expected accumulated discounted reward subject to multiple constraints on its costs.
A new dual approach is proposed with the integration of two ingredients: an entropy-regularized policy optimizer and Vaidya's dual optimizer.
The proposed approach is shown to converge (with linear rate) to the global optimum.
arXiv Detail & Related papers (2022-06-03T16:26:38Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new utilities-dual convex approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual regularizer, and Nesterov's accelerated gradient descent dual optimizer.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - A Dual Approach to Constrained Markov Decision Processes with Entropy
Regularization [7.483040617090451]
We study entropy-regularized constrained Markov decision processes (CMDPs) under the soft-max parameterization.
Our theoretical analysis shows that its Lagrangian dual function is smooth and that the Lagrangian duality gap can be decomposed into the primal optimality gap and the constraint violation.
arXiv Detail & Related papers (2021-10-17T21:26:40Z) - A Stochastic Composite Augmented Lagrangian Method For Reinforcement
Learning [9.204659134755795]
We consider the linear programming (LP) formulation for deep reinforcement learning.
The augmented Lagrangian method suffers from the double-sampling obstacle in solving the LP.
A deep parameterized augmented Lagrangian method is proposed.
arXiv Detail & Related papers (2021-05-20T13:08:06Z)
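As referenced in the "Learning Constrained Optimization with Deep Augmented Lagrangian Methods" entry above, the following is a minimal sketch of the classical Dual Ascent scheme that a learned dual predictor emulates: the dual variable moves along the constraint violation while the primal minimizer of the Lagrangian is recomputed. The quadratic objective, linear constraints, and step size are hypothetical illustration choices, not taken from that paper.

```python
import numpy as np

# Hypothetical constrained problem: min_x 0.5*||x - x0||^2  subject to  A x <= b.
rng = np.random.default_rng(1)
n, m = 4, 2
x0 = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = np.full(m, 0.1)

lam, eta = np.zeros(m), 0.05
for _ in range(2000):
    # Primal step: minimize the Lagrangian 0.5*||x - x0||^2 + lam @ (A x - b),
    # which has the closed form x = x0 - A^T lam for this quadratic objective.
    x = x0 - A.T @ lam
    # Dual Ascent step: move lam along the constraint violation, projected onto lam >= 0.
    # (A learned model, as in the paper above, would predict such dual estimates directly.)
    lam = np.maximum(0.0, lam + eta * (A @ x - b))

print("residual violation:", np.maximum(A @ x - b, 0.0))
```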