Exploitation and Exploration Analysis of Elitist Evolutionary
Algorithms: A Case Study
- URL: http://arxiv.org/abs/2001.10932v1
- Date: Wed, 29 Jan 2020 16:21:39 GMT
- Title: Exploitation and Exploration Analysis of Elitist Evolutionary
Algorithms: A Case Study
- Authors: Yu Chen and Jun He
- Abstract summary: This paper proposes to evaluate exploitation and exploration via the success probability and the one-step improvement rate computed in different domains of integration.
Case studies are performed by analyzing the performance of (1+1) random univariate search and (1+1) evolutionary programming on the sphere function and the cheating problem.
- Score: 6.48717002317456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Known as two cornerstones of problem solving by search, exploitation and
exploration are extensively discussed for implementation and application of
evolutionary algorithms (EAs). However, few studies focus on the
evaluation and theoretical estimation of exploitation and exploration.
Considering that exploitation and exploration are two issues regarding global
search and local search, this paper proposes to evaluate them via the success
probability and the one-step improvement rate computed in different domains of
integration. Then, case studies are performed by analyzing performances of
(1+1) random univariate search and (1+1) evolutionary programming on the sphere
function and the cheating problem. By rigorous theoretical analysis, we
demonstrate that both exploitation and exploration of the investigated elitist
EAs degenerate exponentially with the problem dimension $n$. Meanwhile, it is
also shown that maximization of exploitation and exploration can be achieved by
setting an appropriate value for the standard deviation $\sigma$ of Gaussian
mutation, which is positively related to the distance from the present solution
to the center of the promising region.
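The (1+1) elitist EA analyzed in the abstract can be illustrated with a minimal sketch: a single parent, isotropic Gaussian mutation with standard deviation $\sigma$, and plus-selection that keeps the offspring only if it is no worse. This is an illustrative simplification, not the paper's exact experimental setup; the function names, parameter values, and initialization range below are assumptions.

```python
import numpy as np

def sphere(x):
    """Sphere function f(x) = sum(x_i^2); global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def one_plus_one_ea(n=10, sigma=0.1, max_iters=5000, seed=0):
    """Sketch of a (1+1) elitist EA with Gaussian mutation.

    At each step the parent x is perturbed by N(0, sigma^2 I); the
    offspring replaces the parent only if it is at least as good
    (elitist 'plus' selection), so the best-so-far never degrades.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=n)          # parent solution (assumed range)
    fx = sphere(x)
    for _ in range(max_iters):
        y = x + sigma * rng.standard_normal(n)  # Gaussian mutation
        fy = sphere(y)
        if fy <= fx:                            # elitist selection
            x, fx = y, fy
    return x, fx
```

In this sketch, the success probability discussed in the abstract corresponds to the chance that a single mutation yields `fy <= fx`; as the dimension `n` grows or `sigma` is poorly tuned relative to the distance from the optimum, successful steps become exponentially rarer, which is the degeneration the paper quantifies.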
Related papers
- Unlock the Power of Algorithm Features: A Generalization Analysis for Algorithm Selection [25.29451529910051]
We propose the first provable guarantee for algorithm selection based on algorithm features.
We analyze the benefits and costs associated with algorithm features and investigate how the generalization error is affected by different factors.
arXiv Detail & Related papers (2024-05-18T17:38:25Z)
- Stability and Generalization of the Decentralized Stochastic Gradient Descent Ascent Algorithm [80.94861441583275]
We investigate the generalization bound of the decentralized stochastic gradient descent ascent (D-SGDA) algorithm.
Our results analyze the impact of different top factors on the generalization of D-SGDA.
We also balance it with the generalization to obtain the optimal convex-concave setting.
arXiv Detail & Related papers (2023-10-31T11:27:01Z)
- Balancing exploration and exploitation phases in whale optimization algorithm: an insightful and empirical analysis [4.0814527055582746]
The whale optimization algorithm (WOA), a robust and well-recognized metaheuristic in the literature, proposes a novel scheme to achieve this balance.
This chapter attempts to empirically analyze the WOA algorithm in terms of the local and global search capabilities.
arXiv Detail & Related papers (2023-09-03T19:15:34Z)
- On the Importance of Exploration for Generalization in Reinforcement Learning [89.63074327328765]
We propose EDE: Exploration via Distributional Ensemble, a method that encourages exploration of states with high uncertainty.
Our algorithm is the first value-based approach to achieve state-of-the-art on both Procgen and Crafter.
arXiv Detail & Related papers (2023-06-08T18:07:02Z)
- Mastering the exploration-exploitation trade-off in Bayesian Optimization [0.2538209532048867]
The acquisition function drives the choice of the next solution to evaluate, balancing between exploration and exploitation.
This paper proposes a novel acquisition function that adaptively masters the trade-off between explorative and exploitative choices.
arXiv Detail & Related papers (2023-05-15T13:19:03Z)
- Guarantees for Epsilon-Greedy Reinforcement Learning with Function Approximation [69.1524391595912]
Myopic exploration policies such as epsilon-greedy, softmax, or Gaussian noise fail to explore efficiently in some reinforcement learning tasks.
This paper presents a theoretical analysis of such policies and provides the first regret and sample-complexity bounds for reinforcement learning with myopic exploration.
arXiv Detail & Related papers (2022-06-19T14:44:40Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- Task-Optimal Exploration in Linear Dynamical Systems [29.552894877883883]
We study task-guided exploration and determine what precisely an agent must learn about their environment in order to complete a task.
We provide instance- and task-dependent lower bounds which explicitly quantify the difficulty of completing a task of interest.
We show that it optimally explores the environment, collecting precisely the information needed to complete the task, and provide finite-time bounds guaranteeing that it achieves the instance- and task-optimal sample complexity.
arXiv Detail & Related papers (2021-02-10T01:42:22Z)
- Sparse Reward Exploration via Novelty Search and Emitters [55.41644538483948]
We introduce the SparsE Reward Exploration via Novelty and Emitters (SERENE) algorithm.
SERENE separates the search space exploration and reward exploitation into two alternating processes.
A meta-scheduler allocates a global computational budget by alternating between the two processes.
arXiv Detail & Related papers (2021-02-05T12:34:54Z)
- Geometric Entropic Exploration [52.67987687712534]
We introduce Geometric Entropy Maximisation (GEM), a new algorithm that maximises the geometry-aware Shannon entropy of state-visits in both discrete and continuous domains.
Our key theoretical contribution is casting geometry-aware MSVE exploration as a tractable problem of optimising a simple and novel noise-contrastive objective function.
In our experiments, we show the efficiency of GEM in solving several RL problems with sparse rewards, compared against other deep RL exploration approaches.
arXiv Detail & Related papers (2021-01-06T14:15:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.