Benchmarking a $(\mu+\lambda)$ Genetic Algorithm with Configurable
Crossover Probability
- URL: http://arxiv.org/abs/2006.05889v1
- Date: Wed, 10 Jun 2020 15:22:44 GMT
- Title: Benchmarking a $(\mu+\lambda)$ Genetic Algorithm with Configurable
Crossover Probability
- Authors: Furong Ye and Hao Wang and Carola Doerr and Thomas B\"ack
- Abstract summary: We investigate a family of $(\mu+\lambda)$ Genetic Algorithms (GAs) which creates offspring either from mutation or by recombining two randomly chosen parents.
By scaling the crossover probability, we can thus interpolate from a fully mutation-only algorithm towards a fully crossover-based GA.
- Score: 4.33419118449588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate a family of $(\mu+\lambda)$ Genetic Algorithms (GAs) which
creates offspring either from mutation or by recombining two randomly chosen
parents. By scaling the crossover probability, we can thus interpolate from a
fully mutation-only algorithm towards a fully crossover-based GA. We analyze,
by empirical means, how the performance depends on the interplay of population
size and the crossover probability.
Our comparison on 25 pseudo-Boolean optimization problems reveals an
advantage of crossover-based configurations on several easy optimization tasks,
whereas the picture for more complex optimization problems is rather mixed.
Moreover, we observe that the ``fast'' mutation scheme with its power-law
distributed mutation strengths outperforms standard bit mutation on complex
optimization tasks when it is combined with crossover, but performs worse in
the absence of crossover.
We then take a closer look at the surprisingly good performance of the
crossover-based $(\mu+\lambda)$ GAs on the well-known LeadingOnes benchmark
problem. We observe that the optimal crossover probability increases with
increasing population size $\mu$. At the same time, it decreases with
increasing problem dimension, indicating that the advantages of the crossover
are not visible in the asymptotic view classically applied in runtime analysis.
We therefore argue that a mathematical investigation for fixed dimensions might
help us observe effects which are not visible when focusing exclusively on
asymptotic performance bounds.
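The family of algorithms described above can be sketched in code. This is a minimal illustrative implementation, not the authors' experimental setup: offspring are created by uniform crossover with probability $p_c$ and by standard bit mutation otherwise, with $(\mu+\lambda)$ elitist survivor selection. All names, defaults, and the choice of uniform crossover are assumptions for the sketch; the LeadingOnes fitness matches the benchmark discussed in the abstract.

```python
import random

def leading_ones(x):
    """LeadingOnes benchmark: number of consecutive 1-bits from the left."""
    return next((i for i, b in enumerate(x) if b == 0), len(x))

def mu_plus_lambda_ga(fitness, n, mu=10, lam=10, pc=0.5, budget=5000):
    """Sketch of a (mu+lambda) GA with configurable crossover probability pc.

    pc = 0 gives a mutation-only algorithm; pc = 1 a fully crossover-based GA.
    """
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    evals = 0
    while evals < budget:
        offspring = []
        for _ in range(lam):
            if random.random() < pc:
                # uniform crossover of two randomly chosen parents
                p1, p2 = random.sample(pop, 2)
                child = [random.choice(bits) for bits in zip(p1, p2)]
            else:
                # standard bit mutation: flip each bit with probability 1/n
                parent = random.choice(pop)
                child = [b ^ (random.random() < 1.0 / n) for b in parent]
            offspring.append(child)
            evals += 1
        # (mu+lambda) elitist selection: keep the mu best of parents + offspring
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:mu]
    return max(pop, key=fitness)
```

Varying `pc` for fixed `mu` and `n` reproduces the interpolation studied in the paper, from mutation-only ($p_c = 0$) to fully crossover-based ($p_c = 1$) configurations.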
Related papers
- Analyzing the Runtime of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA) on the Concatenated Trap Function [2.038038953957366]
GOMEA is an evolutionary algorithm that leverages linkage learning to efficiently exploit problem structure.
We show that GOMEA can solve the problem in $O(m^3 2^k)$ evaluations with high probability, where $m$ is the number of subfunctions and $k$ is the subfunction length.
This is a significant speedup compared to the (1+1) Evolutionary Algorithm, which requires $O(\ln(m)(mk)^k)$ expected evaluations.
arXiv Detail & Related papers (2024-07-11T09:37:21Z) - Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z) - Dynastic Potential Crossover Operator [2.908482270923597]
An optimal recombination operator is optimal in the smallest hyperplane containing the two parent solutions.
We present a recombination operator, called Dynastic Potential Crossover (DPX), that runs in polynomial time and behaves like an optimal recombination operator for low-epistasis problems.
arXiv Detail & Related papers (2024-02-06T11:38:23Z) - Faster Convergence with Multiway Preferences [99.68922143784306]
We consider the sign-function-based comparison feedback model and analyze the convergence rates with batched and multiway comparisons.
Our work is the first to study the problem of convex optimization with multiway preferences and analyze the optimal convergence rates.
arXiv Detail & Related papers (2023-12-19T01:52:13Z) - Distributed Extra-gradient with Optimal Complexity and Communication
Guarantees [60.571030754252824]
We consider monotone variational inequality (VI) problems in multi-GPU settings where multiple processors/workers/clients have access to local dual vectors.
Extra-gradient, which is a de facto algorithm for monotone VI problems, has not been designed to be communication-efficient.
We propose a quantized generalized extra-gradient (Q-GenX), which is an unbiased and adaptive compression method tailored to solve VIs.
arXiv Detail & Related papers (2023-08-17T21:15:04Z) - Crossover Can Guarantee Exponential Speed-Ups in Evolutionary
Multi-Objective Optimisation [4.212663349859165]
We provide a theoretical analysis of the well-known EMO algorithms GSEMO and NSGA-II.
We show that immune-inspired hypermutations cannot avoid exponential optimisation times.
arXiv Detail & Related papers (2023-01-31T15:03:34Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - A Rank based Adaptive Mutation in Genetic Algorithm [0.0]
This paper presents an alternate approach of mutation probability generation using chromosome rank to avoid any susceptibility to fitness distribution.
Experiments are done to compare results of simple genetic algorithm (SGA) with constant mutation probability and adaptive approaches within a limited resource constraint.
arXiv Detail & Related papers (2021-04-18T12:41:33Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Fast Mutation in Crossover-based Algorithms [8.34061303235504]
A heavy-tailed mutation operator was proposed by Doerr, Le, Makhmara and Nguyen (GECCO).
An empirical study shows the effectiveness of the fast mutation also on random satisfiable Max-3SAT instances.
arXiv Detail & Related papers (2020-04-14T14:16:42Z)
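The heavy-tailed ("fast") mutation discussed above and in the main abstract samples a mutation strength $k$ from a power-law distribution instead of fixing the bit-flip rate at $1/n$. The sketch below is an assumed minimal version: the exponent `beta = 1.5` and the cap at $n/2$ are common choices in the fast-mutation literature, not values taken from this listing.

```python
import random

def power_law_mutation_strength(n, beta=1.5):
    """Sample a mutation strength k in {1, ..., n // 2} with P(k) proportional
    to k**(-beta), as in heavy-tailed ("fast") mutation."""
    ks = list(range(1, n // 2 + 1))
    weights = [k ** (-beta) for k in ks]
    return random.choices(ks, weights=weights)[0]

def fast_mutate(x, beta=1.5):
    """Flip each bit of x independently with rate k/n, k power-law sampled."""
    n = len(x)
    k = power_law_mutation_strength(n, beta)
    return [b ^ (random.random() < k / n) for b in x]
```

Because small $k$ is most likely but large $k$ retains non-negligible probability, this operator mixes conservative local steps with occasional large jumps, which is the mechanism behind its effectiveness on multimodal problems.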
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.