Towards Applicable Reinforcement Learning: Improving the Generalization
and Sample Efficiency with Policy Ensemble
- URL: http://arxiv.org/abs/2205.09284v1
- Date: Thu, 19 May 2022 02:25:32 GMT
- Title: Towards Applicable Reinforcement Learning: Improving the Generalization
and Sample Efficiency with Policy Ensemble
- Authors: Zhengyu Yang, Kan Ren, Xufang Luo, Minghuan Liu, Weiqing Liu, Jiang
Bian, Weinan Zhang, Dongsheng Li
- Abstract summary: It is challenging for reinforcement learning algorithms to succeed in real-world applications like financial trading and logistics systems.
We propose Ensemble Proximal Policy Optimization (EPPO), which learns ensemble policies in an end-to-end manner.
EPPO achieves higher efficiency and is robust for real-world applications compared with vanilla policy optimization algorithms and other ensemble methods.
- Score: 43.95417785185457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is challenging for reinforcement learning (RL) algorithms to succeed in
real-world applications like financial trading and logistics systems due to
noisy observations and environment shift between training and evaluation.
Resolving such real-world tasks therefore requires both high sample efficiency
and strong generalization. However, directly applying typical RL algorithms can lead to
poor performance in such scenarios. Considering the great performance of
ensemble methods on both accuracy and generalization in supervised learning
(SL), we design a robust and applicable method named Ensemble Proximal Policy
Optimization (EPPO), which learns ensemble policies in an end-to-end manner.
Notably, EPPO combines each policy and the policy ensemble organically and
optimizes both simultaneously. In addition, EPPO adopts a diversity enhancement
regularization over the policy space, which helps generalization to unseen states
and promotes exploration. We theoretically prove that EPPO increases exploration
efficacy, and through comprehensive experimental evaluations on various tasks,
we demonstrate that EPPO achieves higher efficiency and is robust for
real-world applications compared with vanilla policy optimization algorithms
and other ensemble methods. Code and supplemental materials are available at
https://seqml.github.io/eppo.
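As a concrete illustration of the scheme described in the abstract, here is a minimal PyTorch sketch of how an EPPO-style objective could combine per-policy and ensemble-level PPO surrogates with a diversity-enhancement regularizer. The function names, the pairwise-KL diversity term, and the shared behavior log-probabilities are our illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def eppo_style_loss(head_logits, behavior_log_probs, actions, advantages,
                    clip_eps=0.2, div_coef=0.01):
    # head_logits: list of K tensors of shape [batch, n_actions], one per head.
    log_probs = [F.log_softmax(h, dim=-1) for h in head_logits]
    # Ensemble policy: average of the per-head action probabilities.
    ens_log_probs = (torch.stack([lp.exp() for lp in log_probs])
                     .mean(0).clamp_min(1e-8).log())

    def clipped_surrogate(lp):
        # Standard PPO clipped surrogate against the behavior policy.
        ratio = (lp.gather(1, actions.unsqueeze(1)).squeeze(1)
                 - behavior_log_probs).exp()
        return torch.min(ratio * advantages,
                         ratio.clamp(1 - clip_eps, 1 + clip_eps) * advantages).mean()

    # Optimize each individual policy and the ensemble simultaneously.
    policy_obj = (clipped_surrogate(ens_log_probs)
                  + sum(clipped_surrogate(lp) for lp in log_probs))

    # Diversity enhancement: average pairwise KL between the heads.
    K = len(log_probs)
    diversity = sum(F.kl_div(log_probs[i], log_probs[j].exp(), reduction="batchmean")
                    for i in range(K) for j in range(K) if i != j) / (K * (K - 1))

    return -(policy_obj + div_coef * diversity)  # minimize the negative objective

In training, such a loss would be applied per minibatch alongside the usual PPO value-function and entropy terms.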
Related papers
- Diffusion Policy Policy Optimization [37.04382170999901]
Diffusion Policy Policy Optimization (DPPO) is an algorithmic framework for fine-tuning diffusion-based policies.
DPPO achieves the strongest overall performance and efficiency for fine-tuning in common benchmarks.
We show that DPPO takes advantage of unique synergies between RL fine-tuning and the diffusion parameterization.
arXiv Detail & Related papers (2024-09-01T02:47:50Z)
- DPO: Differential reinforcement learning with application to optimal configuration search [3.2857981869020327]
Reinforcement learning with continuous state and action spaces remains one of the most challenging problems within the field.
We propose the first differential RL framework that can handle settings with limited training samples and short-length episodes.
arXiv Detail & Related papers (2024-04-24T03:11:12Z)
- Surpassing legacy approaches to PWR core reload optimization with single-objective Reinforcement learning [0.0]
We have developed methods based on Deep Reinforcement Learning (DRL) for both single- and multi-objective optimization.
In this paper, we demonstrate the advantage of our RL-based approach, specifically using Proximal Policy Optimization (PPO).
PPO adapts its search capability via a policy with learnable weights, allowing it to function as both a global and local search method.
arXiv Detail & Related papers (2024-02-16T19:35:58Z)
- Local Optimization Achieves Global Optimality in Multi-Agent Reinforcement Learning [139.53668999720605]
We present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO.
We prove that with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate.
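As a rough illustration of what "updated similarly to vanilla PPO" means here, the following sketch applies the standard clipped surrogate independently to each agent's local batch; the learner interface (agent.policy, agent.optimizer) is hypothetical, not the authors' code.

import torch

def local_ppo_updates(agents, batches, clip_eps=0.2):
    # Each agent owns its policy and optimizer; batches hold local rollout data.
    for agent, batch in zip(agents, batches):
        dist = agent.policy(batch["obs"])  # per-agent action distribution
        ratio = (dist.log_prob(batch["actions"]) - batch["old_log_probs"]).exp()
        surrogate = torch.min(
            ratio * batch["advantages"],
            ratio.clamp(1 - clip_eps, 1 + clip_eps) * batch["advantages"],
        ).mean()
        agent.optimizer.zero_grad()
        (-surrogate).backward()  # gradient ascent on the clipped objective
        agent.optimizer.step()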
arXiv Detail & Related papers (2023-05-08T16:20:03Z)
- Diverse Policy Optimization for Structured Action Space [59.361076277997704]
We propose Diverse Policy Optimization (DPO) to model policies in structured action spaces as energy-based models (EBMs).
A novel and powerful generative model, GFlowNet, is introduced as an efficient, diverse EBM-based policy sampler.
Experiments on ATSC and Battle benchmarks demonstrate that DPO can efficiently discover surprisingly diverse policies.
arXiv Detail & Related papers (2023-02-23T10:48:09Z)
- Jump-Start Reinforcement Learning [68.82380421479675]
We present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
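As we read the summary, the two policies split each episode between them: a pre-existing guide policy controls the first h steps and the learning ("exploration") policy takes over afterwards. Below is a minimal sketch of that rollout structure, assuming the classic Gym step API; all names and the annealing of h are our assumptions.

def jump_start_rollout(env, guide_policy, explore_policy, h, max_steps=1000):
    # Guide policy controls the first h steps; the learner takes over after.
    obs = env.reset()  # classic Gym-style API assumed
    trajectory = []
    for t in range(max_steps):
        policy = guide_policy if t < h else explore_policy
        action = policy(obs)
        obs_next, reward, done, _ = env.step(action)
        trajectory.append((obs, action, reward, obs_next, done))
        obs = obs_next
        if done:
            break
    return trajectory

# A curriculum would then shrink h toward 0 as the learner's returns improve.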
arXiv Detail & Related papers (2022-04-05T17:25:22Z)
- Semi-On-Policy Training for Sample Efficient Multi-Agent Policy Gradients [51.749831824106046]
We introduce semi-on-policy (SOP) training as an effective and computationally efficient way to address the sample inefficiency of on-policy policy gradient methods.
We show that our methods perform as well or better than state-of-the-art value-based methods on a variety of SMAC tasks.
arXiv Detail & Related papers (2021-04-27T19:37:01Z)
- Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning [83.66080019570461]
We propose two environment-agnostic, algorithm-agnostic quantitative metrics for task difficulty.
We show that these metrics have higher correlations with normalized task solvability scores than a variety of alternatives.
These metrics can also be used for fast and compute-efficient optimizations of key design parameters.
arXiv Detail & Related papers (2021-03-23T17:49:50Z)
- Population-Guided Parallel Policy Search for Reinforcement Learning [17.360163137926]
A new population-guided parallel learning scheme is proposed to enhance the performance of off-policy reinforcement learning (RL).
In the proposed scheme, multiple identical learners with their own value functions and policies share a common experience replay buffer and search for a good policy collaboratively, guided by the best policy's information; a rough sketch of this loop appears after this entry.
arXiv Detail & Related papers (2020-01-09T10:13:57Z)
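The sketch below shows one training step of the scheme as summarized above, assuming a generic off-policy learner interface; the names (collect, off_policy_loss, policy_distance, recent_return) and the guidance term are illustrative, not the paper's implementation.

import random

def population_step(learners, shared_buffer, envs, guide_coef=0.1):
    # All identical learners write experience into one common replay buffer
    # (here simply a list of transitions).
    for learner, env in zip(learners, envs):
        shared_buffer.extend(learner.collect(env))

    # The best policy (by recent evaluation return) guides the others.
    best = max(learners, key=lambda l: l.recent_return)
    for learner in learners:
        batch = random.sample(shared_buffer, k=256)
        loss = learner.off_policy_loss(batch)
        if learner is not best:
            # Guidance: pull the learner's policy toward the best one.
            loss = loss + guide_coef * learner.policy_distance(best.policy)
        learner.update(loss)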
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.