Decaying Clipping Range in Proximal Policy Optimization
- URL: http://arxiv.org/abs/2102.10456v1
- Date: Sat, 20 Feb 2021 22:08:05 GMT
- Title: Decaying Clipping Range in Proximal Policy Optimization
- Authors: Mónika Farsang and Luca Szegletes
- Abstract summary: Proximal Policy Optimization (PPO) is among the most widely used algorithms in reinforcement learning.
Keys to its success are the reliable policy updates through the clipping mechanism and the multiple epochs of minibatch updates.
We propose clipping ranges that decay linearly or exponentially over the course of training.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proximal Policy Optimization (PPO) is among the most widely used algorithms
in reinforcement learning and achieves state-of-the-art performance on many
challenging problems. The keys to its success are reliable policy updates
through the clipping mechanism and multiple epochs of minibatch updates.
The aim of this research is to provide simple but effective alternatives to
the former, i.e., the constant clipping range. To this end, we propose clipping
ranges that decay linearly or exponentially over the course of training. These
schedules allow greater exploration at the beginning of training and impose
stronger restrictions toward the end of the learning phase. We investigate their
performance in several classical control and robotic locomotion environments.
Our analysis shows that they influence the achieved rewards and are effective
alternatives to the constant clipping method in many reinforcement learning tasks.
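For context, PPO's clipped surrogate objective is L^CLIP(θ) = E_t[min(r_t(θ) Â_t, clip(r_t(θ), 1-ε, 1+ε) Â_t)], where r_t(θ) is the probability ratio between the new and old policies; the paper's proposal amounts to replacing the constant ε with a value that shrinks as training progresses. The PyTorch sketch below illustrates one plausible reading of linear and exponential decay. The function names, the default range (0.3 down to 0.1), and the exact decay formulas are illustrative assumptions, not the authors' implementation.

```python
import torch

def decayed_clip_range(step, total_steps, eps_start=0.3, eps_end=0.1, mode="linear"):
    """Clipping range eps at the given training step (illustrative schedules)."""
    frac = min(step / total_steps, 1.0)
    if mode == "linear":
        # Linear interpolation from eps_start down to eps_end.
        return eps_start + frac * (eps_end - eps_start)
    if mode == "exponential":
        # Geometric decay from eps_start toward eps_end.
        return eps_start * (eps_end / eps_start) ** frac
    raise ValueError(f"unknown mode: {mode}")

def ppo_clipped_loss(ratio, advantage, eps):
    """Standard PPO clipped surrogate loss, evaluated with the scheduled eps."""
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()

# Usage inside a training loop (hypothetical variable names):
#   eps = decayed_clip_range(update_idx, num_updates, mode="exponential")
#   loss = ppo_clipped_loss(prob_ratio, advantages, eps)
```

Note that constant clipping corresponds to eps_start == eps_end, so the decaying variants can be compared against the baseline without changing the loss itself.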
Related papers
- Preference-Guided Reinforcement Learning for Efficient Exploration [7.83845308102632]
We introduce LOPE: Learning Online with trajectory Preference guidancE, an end-to-end preference-guided RL framework.
Our intuition is that LOPE directly adjusts the focus of online exploration by considering human feedback as guidance.
LOPE outperforms several state-of-the-art methods in terms of convergence rate and overall performance.
arXiv Detail & Related papers (2024-07-09T02:11:12Z) - Learning Diverse Policies with Soft Self-Generated Guidance [2.9602904918952695]
Reinforcement learning with sparse and deceptive rewards is challenging because non-zero rewards are rarely obtained.
This paper develops an approach that uses diverse past trajectories for faster and more efficient online RL.
arXiv Detail & Related papers (2024-02-07T02:53:50Z) - Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning [68.16998247593209]
The offline reinforcement learning (RL) paradigm provides a recipe to convert static behavior datasets into policies that can outperform the policy that collected the data.
In this paper, we propose an adaptive scheme for action quantization.
We show that several state-of-the-art offline RL methods such as IQL, CQL, and BRAC improve in performance on benchmarks when combined with our proposed discretization scheme.
arXiv Detail & Related papers (2023-10-18T06:07:10Z) - Statistically Efficient Variance Reduction with Double Policy Estimation for Off-Policy Evaluation in Sequence-Modeled Reinforcement Learning [53.97273491846883]
We propose DPE: an RL algorithm that blends offline sequence modeling and offline reinforcement learning with Double Policy Estimation.
We validate our method on multiple OpenAI Gym tasks with D4RL benchmarks.
arXiv Detail & Related papers (2023-08-28T20:46:07Z) - SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration [21.764280583041703]
Skill reuse is one of the most common approaches, but current methods have considerable limitations.
We introduce an alternative approach to mitigate these problems.
Our approach learns to sequence existing temporally-extended skills for exploration but learns the final policy directly from the raw experience.
arXiv Detail & Related papers (2022-11-24T18:05:01Z) - Meta Reinforcement Learning with Successor Feature Based Context [51.35452583759734]
We propose a novel meta-RL approach that achieves competitive performance compared to existing meta-RL algorithms.
Our method not only learns high-quality policies for multiple tasks simultaneously but also adapts quickly to new tasks with a small amount of training.
arXiv Detail & Related papers (2022-07-29T14:52:47Z) - On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z) - An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML); it describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z) - DDPG++: Striving for Simplicity in Continuous-control Off-Policy Reinforcement Learning [95.60782037764928]
First, we show that the simple Deterministic Policy Gradient works remarkably well as long as the overestimation bias is controlled.
Second, we attribute the training instabilities typical of off-policy algorithms to the greedy policy update step.
Third, we show that ideas from the propensity estimation literature can be used to importance-sample transitions from the replay buffer and update the policy so as to prevent performance deterioration.
arXiv Detail & Related papers (2020-06-26T20:21:12Z) - SOAC: The Soft Option Actor-Critic Architecture [25.198302636265286]
Methods have been proposed for concurrently learning low-level intra-option policies and a high-level option selection policy.
Existing methods typically suffer from two major challenges: ineffective exploration and unstable updates.
We present a novel and stable off-policy approach that builds on the maximum entropy model to address these challenges.
arXiv Detail & Related papers (2020-06-25T13:06:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.