Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and
Research Opportunities
- URL: http://arxiv.org/abs/2308.13420v3
- Date: Sun, 28 Jan 2024 02:06:36 GMT
- Authors: Yanjie Song, Yutong Wu, Yangyang Guo, Ran Yan, P. N. Suganthan, Yue
Zhang, Witold Pedrycz, Swagatam Das, Rammohan Mallipeddi, Oladayo Solomon
Ajani, Qiang Feng
- Abstract summary: Reinforcement learning integrated as a component in the evolutionary algorithm has demonstrated superior performance in recent years.
We discuss the RL-EA integration method, the RL-assisted strategy adopted by RL-EA, and its applications according to the existing literature.
In the applications of RL-EA section, we also demonstrate the excellent performance of RL-EA on several benchmarks and a range of public datasets.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Evolutionary algorithms (EAs), a class of stochastic search methods based on
the principles of natural evolution, have received widespread acclaim for their
exceptional performance in various real-world optimization problems. While
researchers worldwide have proposed a wide variety of EAs, certain limitations
remain, such as slow convergence speed and poor generalization capabilities.
Consequently, numerous scholars actively explore improvements to algorithmic
structures, operators, search patterns, etc., to enhance their optimization
performance. Reinforcement learning (RL) integrated as a component in the EA
framework has demonstrated superior performance in recent years. This paper
presents a comprehensive survey on integrating reinforcement learning into the
evolutionary algorithm, referred to as reinforcement learning-assisted
evolutionary algorithm (RL-EA). We begin with the conceptual outlines of
reinforcement learning and the evolutionary algorithm. We then provide a
taxonomy of RL-EA. Subsequently, we discuss the RL-EA integration method, the
RL-assisted strategy adopted by RL-EA, and its applications according to the
existing literature. The RL-assisted procedures are categorized by the function
they implement: solution generation, learnable objective functions,
algorithm/operator/sub-population selection, parameter adaptation, and other
strategies. The different attribute settings of RL within RL-EA are also
discussed. In the section on RL-EA applications, we also demonstrate the
excellent performance of RL-EA on several benchmarks and a range of public
datasets to facilitate a quick comparative study. Finally, we analyze potential
directions for future research.
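One pattern from the taxonomy above, parameter adaptation, can be illustrated with a minimal sketch (this toy example is not from the paper): a single-state Q-learning agent, effectively a bandit, picks the mutation rate of a (1+1)-EA on the OneMax benchmark each generation, using the fitness gain as the RL reward. All names and hyperparameters here are illustrative assumptions.

```python
import random

def rl_assisted_ea(n_bits=20, generations=300, seed=0):
    """(1+1)-EA on OneMax where a tabular Q-learning agent adapts the
    mutation rate each generation -- an illustrative instance of the
    'parameter adaptation' RL-assisted strategy."""
    rng = random.Random(seed)
    actions = [0.5 / n_bits, 1.0 / n_bits, 2.0 / n_bits]  # candidate mutation rates
    q = [0.0] * len(actions)   # single-state Q-table (a bandit, for brevity)
    alpha, eps = 0.1, 0.2      # learning rate, exploration probability

    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    fitness = sum(parent)      # OneMax fitness: number of ones
    history = [fitness]

    for _ in range(generations):
        # epsilon-greedy selection of a mutation rate
        if rng.random() < eps:
            a = rng.randrange(len(actions))
        else:
            a = max(range(len(actions)), key=lambda i: q[i])
        rate = actions[a]
        # bit-flip mutation at the chosen rate
        child = [1 - b if rng.random() < rate else b for b in parent]
        child_fit = sum(child)
        # fitness improvement serves as the RL reward signal
        q[a] += alpha * ((child_fit - fitness) - q[a])
        if child_fit >= fitness:   # elitist replacement
            parent, fitness = child, child_fit
        history.append(fitness)
    return fitness, history

best, hist = rl_assisted_ea()
```

In a full RL-EA, the agent would typically observe a richer state (population statistics, stagnation counters) and could instead select among operators or sub-populations, per the taxonomy above.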
Related papers
- Generative AI for Deep Reinforcement Learning: Framework, Analysis, and Use Cases [60.30995339585003]
Deep reinforcement learning (DRL) has been widely applied across various fields and has achieved remarkable accomplishments.
DRL faces certain limitations, including low sample efficiency and poor generalization.
We present how to leverage generative AI (GAI) to address these issues and enhance the performance of DRL algorithms.
arXiv: 2024-05-31T01:25:40Z
- Bridging Evolutionary Algorithms and Reinforcement Learning: A Comprehensive Survey on Hybrid Algorithms [50.91348344666895]
Evolutionary Reinforcement Learning (ERL) integrates Evolutionary Algorithms (EAs) and Reinforcement Learning (RL) for optimization.
This survey offers a comprehensive overview of the diverse research branches in ERL.
arXiv: 2024-01-22T14:06:37Z
- Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design [54.39859618450935]
We show that it is possible to meta-learn update rules, with the hope of discovering algorithms that can perform well on a wide range of RL tasks.
Despite impressive initial results from algorithms such as Learned Policy Gradient (LPG), there remains a gap when these algorithms are applied to unseen environments.
In this work, we examine how characteristics of the meta-supervised-training distribution impact the performance of these algorithms.
arXiv: 2023-10-04T12:52:56Z
- BiERL: A Meta Evolutionary Reinforcement Learning Framework via Bilevel Optimization [34.24884427152513]
We propose a general meta ERL framework via bilevel optimization (BiERL).
We design an elegant meta-level architecture that embeds the inner-level's evolving experience into an informative population representation.
We perform extensive experiments in MuJoCo and Box2D tasks to verify that as a general framework, BiERL outperforms various baselines and consistently improves the learning performance for a diversity of ERL algorithms.
arXiv: 2023-08-01T09:31:51Z
- Provable Reward-Agnostic Preference-Based Reinforcement Learning [61.39541986848391]
Preference-based Reinforcement Learning (PbRL) is a paradigm in which an RL agent learns to optimize a task using pair-wise preference-based feedback over trajectories.
We propose a theoretical reward-agnostic PbRL framework where exploratory trajectories that enable accurate learning of hidden reward functions are acquired.
arXiv: 2023-05-29T15:00:09Z
- Evolutionary Reinforcement Learning: A Survey [31.112066295496003]
Reinforcement learning (RL) is a machine learning approach that trains agents to maximize cumulative rewards through interactions with environments.
This article presents a comprehensive survey of state-of-the-art methods for integrating evolutionary computation (EC) into RL, referred to as evolutionary reinforcement learning (EvoRL).
arXiv: 2023-03-07T01:38:42Z
- Ensemble Reinforcement Learning: A Survey [43.17635633600716]
Reinforcement Learning (RL) has emerged as a highly effective technique for addressing various scientific and applied problems.
In response, ensemble reinforcement learning (ERL), a promising approach that combines the benefits of both RL and ensemble learning (EL), has gained widespread popularity.
ERL leverages multiple models or training algorithms to comprehensively explore the problem space and possesses strong generalization capabilities.
arXiv: 2023-03-05T09:26:44Z
- Ensemble Reinforcement Learning in Continuous Spaces -- A Hierarchical Multi-Step Approach for Policy Training [4.982806898121435]
We propose a new technique to train an ensemble of base learners based on an innovative multi-step integration method.
This training technique enables us to develop a new hierarchical learning algorithm for ensemble DRL that effectively promotes inter-learner collaboration.
The algorithm is also shown empirically to outperform several state-of-the-art DRL algorithms on multiple benchmark RL problems.
arXiv: 2022-09-29T00:42:44Z
- Behavioral Priors and Dynamics Models: Improving Performance and Domain Transfer in Offline RL [82.93243616342275]
We introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE).
MABE is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary.
In experiments that require cross-domain generalization, we find that MABE outperforms prior methods.
arXiv: 2021-06-16T20:48:49Z