Optimistic Natural Policy Gradient: a Simple Efficient Policy
Optimization Framework for Online RL
- URL: http://arxiv.org/abs/2305.11032v2
- Date: Mon, 4 Dec 2023 02:24:19 GMT
- Title: Optimistic Natural Policy Gradient: a Simple Efficient Policy
Optimization Framework for Online RL
- Authors: Qinghua Liu, Gellért Weisz, András György, Chi Jin, Csaba Szepesvári
- Abstract summary: This paper proposes a simple efficient policy optimization framework -- Optimistic NPG for online RL.
For $d$-dimensional linear MDPs, Optimistic NPG is computationally efficient, and learns an $\varepsilon$-optimal policy within $\tilde{O}(d^2/\varepsilon^3)$ samples.
- Score: 23.957148537567146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While policy optimization algorithms have played an important role in recent
empirical success of Reinforcement Learning (RL), the existing theoretical
understanding of policy optimization remains rather limited: existing results are either
restricted to tabular MDPs or suffer from highly suboptimal sample complexity,
especially in online RL where exploration is necessary. This paper proposes a
simple efficient policy optimization framework -- Optimistic NPG for online RL.
Optimistic NPG can be viewed as a simple combination of the classic natural
policy gradient (NPG) algorithm [Kakade, 2001] with optimistic policy
evaluation subroutines to encourage exploration. For $d$-dimensional linear
MDPs, Optimistic NPG is computationally efficient, and learns an
$\varepsilon$-optimal policy within $\tilde{O}(d^2/\varepsilon^3)$ samples,
which is the first computationally efficient algorithm whose sample complexity
has the optimal dimension dependence $\tilde{\Theta}(d^2)$. It also improves
over state-of-the-art results of policy optimization algorithms [Zanette et
al., 2021] by a factor of $d$. In the realm of general function approximation,
which subsumes linear MDPs, Optimistic NPG, to our best knowledge, stands as
the first policy optimization algorithm that achieves polynomial sample
complexity for learning near-optimal policies.
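As a concrete illustration of the abstract's description (classic NPG updates combined with an optimistic policy-evaluation subroutine), the sketch below instantiates that template for a linear MDP with discrete states and actions. This is a minimal sketch under stated assumptions, not the paper's exact algorithm: the environment interface (`env.reset()`, `env.step()`), the feature map `features(s, a)`, and the constants `eta`, `beta`, `lam`, `T`, `K` are hypothetical placeholders, and the evaluation step uses only the standard ridge-regression-plus-elliptical-bonus form, which may differ from the paper's construction.

```python
import numpy as np

# Schematic sketch of an Optimistic NPG-style loop for a d-dimensional linear MDP.
# Assumptions (not from the paper): integer-indexed states/actions, a hypothetical
# feature map features(s, a) -> length-d vector, a hypothetical env with
# reset()/step(a) -> (next_state, reward), and placeholder constants.

def optimistic_npg(env, features, S, A, H, d, T=100, K=10,
                   eta=0.1, beta=1.0, lam=1.0):
    logits = np.zeros((H, S, A))          # softmax policy logits, one table per step h

    def policy(h, s):
        z = logits[h, s] - logits[h, s].max()
        p = np.exp(z)
        return p / p.sum()

    for _ in range(T):
        # ---- optimistic policy evaluation: ridge regression + elliptical bonus ----
        data = [[] for _ in range(H)]     # transitions collected per step
        for _ in range(K):                # roll out K episodes with the current policy
            s = env.reset()
            for h in range(H):
                a = np.random.choice(A, p=policy(h, s))
                s_next, r = env.step(a)
                data[h].append((s, a, r, s_next))
                s = s_next

        Q = np.zeros((H, S, A))
        V_next = np.zeros(S)              # value at step H is zero
        for h in reversed(range(H)):
            Phi = np.array([features(s, a) for (s, a, _, _) in data[h]])
            y = np.array([r + V_next[s2] for (_, _, r, s2) in data[h]])
            Sigma = Phi.T @ Phi + lam * np.eye(d)
            w = np.linalg.solve(Sigma, Phi.T @ y)       # regression weights
            Sigma_inv = np.linalg.inv(Sigma)
            for s in range(S):
                for a in range(A):
                    phi = features(s, a)
                    bonus = beta * np.sqrt(phi @ Sigma_inv @ phi)  # optimism bonus
                    Q[h, s, a] = np.clip(phi @ w + bonus, 0.0, H - h)
            # value of the current policy under the optimistic Q at step h
            V_next = np.array([policy(h, s) @ Q[h, s] for s in range(S)])

        # ---- NPG step: for softmax policies this is a multiplicative-weights update ----
        logits += eta * Q

    return logits
```

Note that for a softmax parameterization the natural policy gradient step reduces to adding `eta * Q` to the logits, which is why the policy update is a single line once an optimistic Q estimate is in hand.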
Related papers
- Towards Efficient Exact Optimization of Language Model Alignment [93.39181634597877]
Direct preference optimization (DPO) was proposed to directly optimize the policy from preference data.
We show that DPO, when derived based on the optimal solution of the problem, leads to a compromised mean-seeking approximation of the optimal solution in practice.
We propose efficient exact optimization (EXO) of the alignment objective.
arXiv Detail & Related papers (2024-02-01T18:51:54Z)
- Low-Switching Policy Gradient with Exploration via Online Sensitivity Sampling [23.989009116398208]
We design a low-switching sample-efficient policy optimization algorithm, LPO, with general non-linear function approximation.
We show that our algorithm obtains an $\varepsilon$-optimal policy with only $\widetilde{O}(\mathrm{poly}(d)/\varepsilon^{3})$ samples.
arXiv Detail & Related papers (2023-06-15T23:51:46Z)
- Policy Gradient Algorithms Implicitly Optimize by Continuation [7.351769270728942]
We argue that exploration in policy-gradient algorithms consists in a continuation of the return of the policy at hand, and that policies should be history-dependent rather than memoryless in order to maximize the return.
arXiv Detail & Related papers (2023-05-11T14:50:20Z)
- Understanding the Effect of Stochasticity in Policy Optimization [86.7574122154668]
We show that the preferability of optimization methods depends critically on whether exact gradients are used.
Second, to explain these findings we introduce the concept of committal rate for policy optimization.
Third, we show that in the absence of external oracle information, there is an inherent trade-off between exploiting geometry to accelerate convergence versus achieving optimality almost surely.
arXiv Detail & Related papers (2021-10-29T06:35:44Z)
- Optimistic Policy Optimization is Provably Efficient in Non-stationary MDPs [45.6318149525364]
We study episodic reinforcement learning (RL) in non-stationary linear kernel Markov decision processes (MDPs).
We propose the $\underline{\text{p}}$eriodically $\underline{\text{r}}$estarted $\underline{\text{o}}$ptimistic $\underline{\text{p}}$olicy $\underline{\text{o}}$ptimization algorithm (PROPO).
arXiv Detail & Related papers (2021-10-18T02:33:20Z)
- Bregman Gradient Policy Optimization [97.73041344738117]
We design a Bregman gradient policy optimization for reinforcement learning based on Bregman divergences and momentum techniques.
VR-BGPO reaches the best known complexity $\tilde{O}(\epsilon^{-3})$ for finding an $\epsilon$-stationary point, requiring only one trajectory at each iteration.
arXiv Detail & Related papers (2021-06-23T01:08:54Z)
- Cautiously Optimistic Policy Optimization and Exploration with Linear Function Approximation [48.744735294559824]
Policy optimization methods are popular reinforcement learning algorithms, because their incremental and on-policy nature makes them more stable than the value-based counterparts.
In this paper, we propose a new algorithm, COPOE, that overcomes the sample complexity issue of PCPG while retaining its robustness to model misspecification.
The result is an improvement in sample complexity from $\widetilde{O}(1/\epsilon^{11})$ for PCPG to $\widetilde{O}(1/\epsilon^{3})$ for COPOE, nearly bridging the gap with value-based techniques.
arXiv Detail & Related papers (2021-03-24T01:42:59Z)
- On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method [38.34416337932712]
Policy gradient gives rise to a rich class of reinforcement learning (RL) methods, for example REINFORCE.
Yet the best known sample complexity result for such methods to find an $\epsilon$-optimal policy is $\mathcal{O}(\epsilon^{-3})$, which is suboptimal.
We study the fundamental convergence properties and sample efficiency of first-order policy optimization method.
arXiv Detail & Related papers (2021-02-17T07:06:19Z)
- Provably Efficient Exploration in Policy Optimization [117.09887790160406]
This paper proposes an Optimistic variant of the Proximal Policy Optimization algorithm (OPPO).
OPPO achieves $\tilde{O}(\sqrt{d^2 H^3 T})$ regret.
To the best of our knowledge, OPPO is the first provably efficient policy optimization algorithm that explores.
arXiv Detail & Related papers (2019-12-12T08:40:02Z)