On-Policy Policy Gradient Reinforcement Learning Without On-Policy Sampling
- URL: http://arxiv.org/abs/2311.08290v1
- Date: Tue, 14 Nov 2023 16:37:28 GMT
- Title: On-Policy Policy Gradient Reinforcement Learning Without On-Policy Sampling
- Authors: Nicholas E. Corrado, Josiah P. Hanna
- Abstract summary: We introduce an adaptive, off-policy sampling method to improve the data efficiency of on-policy policy gradient algorithms.
Our method, Proximal Robust On-Policy Sampling (PROPS), reduces sampling error by collecting data with a behavior policy that increases the probability of actions that are under-sampled with respect to the current policy.
Our work improves the RL community's understanding of a nuance in the on-policy vs off-policy dichotomy: on-policy learning requires on-policy data, not on-policy sampling.
- Score: 4.164735640341406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-policy reinforcement learning (RL) algorithms perform policy updates using
i.i.d. trajectories collected by the current policy. However, after observing
only a finite number of trajectories, on-policy sampling may produce data that
fails to match the expected on-policy data distribution. This sampling error
leads to noisy updates and data-inefficient on-policy learning. Recent work in
the policy evaluation setting has shown that non-i.i.d., off-policy sampling
can produce data with lower sampling error than on-policy sampling.
Motivated by this observation, we introduce an adaptive, off-policy sampling
method to improve the data efficiency of on-policy policy gradient algorithms.
Our method, Proximal Robust On-Policy Sampling (PROPS), reduces sampling error
by collecting data with a behavior policy that increases the probability of
sampling actions that are under-sampled with respect to the current policy.
Rather than discarding data from old policies -- as is commonly done in
on-policy algorithms -- PROPS uses data collection to adjust the distribution
of previously collected data to be approximately on-policy. We empirically
evaluate PROPS on both continuous-action MuJoCo benchmark tasks and
discrete-action tasks and demonstrate that PROPS (1) decreases sampling error
throughout training and (2) improves the data efficiency of on-policy policy
gradient algorithms. Our work improves the RL community's understanding of a
nuance in the on-policy vs off-policy dichotomy: on-policy learning requires
on-policy data, not on-policy sampling.
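The abstract states the mechanism only at a high level, so the code below is a minimal sketch of that idea rather than the paper's algorithm: for a small discrete action space with tabular visit counts, it forms a behavior distribution that upweights actions under-sampled relative to the current policy, with a proximal-style clip keeping it close to that policy. The function name, the tabular counts, and the clip_ratio parameter are illustrative assumptions.

```python
import numpy as np

def props_style_behavior_probs(pi_probs, action_counts, clip_ratio=1.2):
    """Sketch of a PROPS-like behavior distribution for a single state.

    pi_probs:      current policy's action probabilities pi(a|s)
    action_counts: how often each action has been sampled in this state so far
    clip_ratio:    cap on how far the behavior policy may deviate from pi
                   (a proximal-style constraint; the value here is arbitrary)
    """
    total = action_counts.sum()
    if total == 0:
        return pi_probs  # no data yet: sample on-policy

    empirical = action_counts / total
    # Upweight actions that are under-sampled relative to pi(a|s) and
    # downweight over-sampled ones, staying within the proximal clip range.
    correction = np.clip(pi_probs / np.maximum(empirical, 1e-8),
                         1.0 / clip_ratio, clip_ratio)
    behavior = pi_probs * correction
    return behavior / behavior.sum()

# Usage: action 0 is over-sampled (8 of 10 draws) under a uniform current policy,
# so the behavior distribution shifts probability toward action 1.
pi = np.array([0.5, 0.5])
counts = np.array([8.0, 2.0])
print(props_style_behavior_probs(pi, counts))  # approx. [0.41, 0.59]
```

Aggregated over the retained buffer, data drawn this way should track the current policy's action distribution more closely than a same-sized batch of fresh on-policy samples, which is the sense in which PROPS keeps old data approximately on-policy.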
Related papers
- Policy Gradient with Active Importance Sampling [55.112959067035916]
Policy gradient (PG) methods significantly benefit from importance sampling (IS), enabling the effective reuse of previously collected samples.
However, IS is typically employed in RL as a passive tool for re-weighting historical samples.
We instead look for the best behavioral policy from which to collect samples so as to reduce the policy-gradient variance (a sketch of this importance-weighted estimator appears after this list).
arXiv Detail & Related papers (2024-05-09T09:08:09Z)
- Projected Off-Policy Q-Learning (POP-QL) for Stabilizing Offline Reinforcement Learning [57.83919813698673]
Projected Off-Policy Q-Learning (POP-QL) is a novel actor-critic algorithm that simultaneously reweights off-policy samples and constrains the policy to prevent divergence and reduce value-approximation error.
In our experiments, POP-QL not only shows competitive performance on standard benchmarks, but also outperforms competing methods in tasks where the data-collection policy is significantly sub-optimal.
arXiv Detail & Related papers (2023-11-25T00:30:58Z)
- Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method on multiple OpenAI Gym tasks using D4RL benchmarks.
arXiv Detail & Related papers (2023-08-28T20:46:07Z)
- Iteratively Refined Behavior Regularization for Offline Reinforcement Learning [57.10922880400715]
In this paper, we propose a new algorithm that substantially enhances behavior regularization based on conservative policy iteration.
By iteratively refining the reference policy used for behavior regularization, the conservative policy update guarantees gradual improvement (a generic sketch of behavior regularization appears after this list).
Experimental results on the D4RL benchmark indicate that our method outperforms previous state-of-the-art baselines in most tasks.
arXiv Detail & Related papers (2023-06-09T07:46:24Z)
- Mitigating Off-Policy Bias in Actor-Critic Methods with One-Step Q-learning: A Novel Correction Approach [0.0]
We introduce a novel policy similarity measure to mitigate the effects of the off-policy discrepancy in continuous control.
Our method offers an adequate single-step off-policy correction that is applicable to deterministic policy networks.
arXiv Detail & Related papers (2022-08-01T11:33:12Z)
- Sigmoidally Preconditioned Off-policy Learning: a new exploration method for reinforcement learning [14.991913317341417]
We focus on an off-policy Actor-Critic architecture and propose a novel method called Preconditioned Proximal Policy Optimization (P3O).
P3O can control the high variance of importance sampling by applying a preconditioner to the Conservative Policy Iteration (CPI) objective.
Results show that our P3O maximizes the CPI objective better than PPO during the training process.
arXiv Detail & Related papers (2022-05-20T09:38:04Z)
- Latent-Variable Advantage-Weighted Policy Optimization for Offline RL [70.01851346635637]
Offline reinforcement learning methods hold the promise of learning policies from pre-collected datasets without the need to query the environment for new transitions.
In practice, offline datasets are often heterogeneous, i.e., collected in a variety of scenarios.
We propose to leverage latent-variable policies that can represent a broader class of policy distributions.
Our method improves over the next best-performing offline reinforcement learning method by 49% in average performance on heterogeneous datasets.
arXiv Detail & Related papers (2022-03-16T21:17:03Z)
- Robust On-Policy Data Collection for Data-Efficient Policy Evaluation [7.745028845389033]
In policy evaluation, the task is to estimate the expected return of an evaluation policy on an environment of interest.
We consider a setting where we can collect a small amount of additional data to combine with a potentially larger offline RL dataset.
We show that simply running the evaluation policy -- on-policy data collection -- is sub-optimal for this setting.
arXiv Detail & Related papers (2021-11-29T14:30:26Z)
- Off-Policy Evaluation of Bandit Algorithm from Dependent Samples under Batch Update Policy [8.807587076209566]
The goal of off-policy evaluation (OPE) is to evaluate a new policy using historical data obtained via a behavior policy.
Because the contextual bandit updates the policy based on past observations, the samples are not independent and identically distributed.
This paper tackles this problem by constructing an estimator from a martingale difference sequence (MDS) for the dependent samples.
arXiv Detail & Related papers (2020-10-23T15:22:57Z)
- DDPG++: Striving for Simplicity in Continuous-control Off-Policy Reinforcement Learning [95.60782037764928]
We show that simple Deterministic Policy Gradient works remarkably well as long as the overestimation bias is controlled.
Second, we pinpoint training instabilities, typical of off-policy algorithms, to the greedy policy update step.
Third, we show that ideas from the propensity estimation literature can be used to importance-sample transitions from the replay buffer and update the policy to prevent deterioration of performance.
arXiv Detail & Related papers (2020-06-26T20:21:12Z)
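Several of the entries above (active importance sampling for policy gradients, P3O's preconditioned CPI objective, DDPG++'s propensity-weighted replay) rest on the same basic device: off-policy samples are reweighted by the ratio between the current policy and the behavior policy that collected them. The snippet below is a minimal sketch of that importance-sampled surrogate, assuming PyTorch tensors of per-sample log-probabilities and advantages; the function name and signature are illustrative, not any of the papers' APIs.

```python
import torch

def is_weighted_pg_surrogate(logp_current, logp_behavior, advantages):
    """Importance-sampled policy-gradient surrogate (CPI-style), illustrative only.

    logp_current:  log pi_theta(a|s) for the sampled actions (grad-enabled)
    logp_behavior: log b(a|s) under the behavior policy that collected the data
    advantages:    advantage (or return) estimates, treated as constants
    """
    # pi_theta(a|s) / b(a|s): reweights off-policy samples toward the current policy.
    ratio = torch.exp(logp_current - logp_behavior.detach())
    # Differentiating -E[ratio * A] w.r.t. theta recovers the importance-weighted
    # policy gradient E_b[(pi/b) * grad log pi(a|s) * A].
    return -(ratio * advantages.detach()).mean()
```

The variance of the ratio grows as the two policies drift apart, which is the failure mode that the clipping, preconditioning, and behavior-policy-selection ideas above aim to control.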
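The iteratively refined behavior-regularization entry builds on a standard offline-RL pattern: improve the actor against the critic while penalizing divergence from a reference policy, and refresh that reference as the policy improves. The sketch below shows only that generic pattern under assumed names; the alpha weight, the sampled-KL penalty, and the refresh rule are assumptions, not the paper's algorithm.

```python
import torch

def behavior_regularized_actor_loss(q_values, logp_pi, logp_ref, alpha=1.0):
    """Behavior-regularized actor loss (generic sketch, not the paper's objective).

    q_values: critic estimates Q(s, a) for actions a sampled from the current policy
    logp_pi:  log pi(a|s) for those sampled actions (grad-enabled)
    logp_ref: log pi_ref(a|s) under the frozen reference policy
    alpha:    regularization weight (an assumed hyperparameter)
    """
    # Sample-based estimate of KL(pi || pi_ref): E_{a~pi}[log pi(a|s) - log pi_ref(a|s)].
    kl_penalty = (logp_pi - logp_ref.detach()).mean()
    # Maximize expected Q while staying close to the reference policy.
    return -q_values.mean() + alpha * kl_penalty
```

In this reading, the "iterative refinement" amounts to periodically replacing the frozen reference policy with the current improved policy, so that each conservative update is made relative to the latest reference rather than staying anchored to the data-collection policy.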