From Understanding Genetic Drift to a Smart-Restart Parameter-less
Compact Genetic Algorithm
- URL: http://arxiv.org/abs/2004.07141v3
- Date: Wed, 9 Jun 2021 07:16:21 GMT
- Title: From Understanding Genetic Drift to a Smart-Restart Parameter-less
Compact Genetic Algorithm
- Authors: Benjamin Doerr and Weijie Zheng
- Abstract summary: In the regime with no genetic drift, the runtime is roughly proportional to the population size.
We propose a parameter-less version of the compact genetic algorithm that automatically finds a suitable population size.
- Score: 15.56430085052365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the key difficulties in using estimation-of-distribution algorithms is
choosing the population size(s) appropriately: Too small values lead to genetic
drift, which can cause enormous difficulties. In the regime with no genetic
drift, however, often the runtime is roughly proportional to the population
size, which renders large population sizes inefficient.
Based on a recent quantitative analysis of which population sizes lead to
genetic drift, we propose a parameter-less version of the compact genetic
algorithm that automatically finds a suitable population size without spending
too much time in situations unfavorable due to genetic drift.
We prove a mathematical runtime guarantee for this algorithm and conduct an
extensive experimental analysis on four classic benchmark problems both without
and with additive centered Gaussian posterior noise. The former shows that
under a natural assumption, our algorithm has a performance very similar to the
one obtainable from the best problem-specific population size. The latter
confirms that missing the right population size in the original cGA can be
detrimental and that previous theory-based suggestions for the population size
can be far away from the right values; it also shows that our algorithm as well
as a previously proposed parameter-less variant of the cGA based on parallel
runs avoid such pitfalls. Comparing the two parameter-less approaches, ours
profits from its ability to abort runs which are likely to be stuck in a
genetic drift situation.
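The smart-restart scheme described in the abstract can be sketched as follows: run the cGA with a given population size, abort the run once a budget depending on that size is exhausted, and restart with a larger size. This is a minimal illustrative sketch on the OneMax benchmark; the starting size, the budget rule `4*K*K`, and the growth factor are assumptions for illustration, not the constants analyzed in the paper.

```python
import random

def cga(n, K, budget):
    """One run of the compact GA on OneMax with (hypothetical) population size K.
    Each iteration samples two individuals from the frequency vector and shifts
    the vector toward the winner by 1/K.  Returns the best fitness found."""
    p = [0.5] * n  # frequency vector, one marginal probability per bit
    best = 0
    for _ in range(budget):  # budget counts sampled pairs, not evaluations
        x = [1 if random.random() < pi else 0 for pi in p]
        y = [1 if random.random() < pi else 0 for pi in p]
        fx, fy = sum(x), sum(y)          # OneMax fitness: number of ones
        if fx < fy:                      # make x the winner
            x, y, fx = y, x, fy
        best = max(best, fx)
        if fx == n:                      # optimum sampled: stop this run
            return n
        for i in range(n):
            if x[i] != y[i]:             # shift only where the two disagree
                p[i] += (1 / K) if x[i] == 1 else (-1 / K)
                # keep frequencies inside the usual margins [1/n, 1 - 1/n]
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))
    return best

def smart_restart_cga(n, K0=8, growth=2):
    """Illustrative smart-restart loop: abort each cGA run after a budget that
    grows with K (here 4*K^2, an assumed schedule), then restart with a
    population size multiplied by `growth`.  Returns the K that succeeded."""
    K = K0
    while True:
        if cga(n, K, budget=4 * K * K) == n:
            return K
        K *= growth
```

For example, `smart_restart_cga(20)` tries K = 8, 16, 32, ... until one run samples the all-ones string; aborting short, drift-prone runs early is what distinguishes this from simply running the cGA once with a large population size.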
Related papers
- Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z) - Evaluating Genetic Algorithms through the Approximability Hierarchy [55.938644481736446]
In this paper, we analyze the usefulness of using genetic algorithms depending on the approximation class the problem belongs to.
In particular, we use the standard approximability hierarchy, showing that genetic algorithms are especially useful for the most pessimistic classes of the hierarchy.
arXiv Detail & Related papers (2024-02-01T09:18:34Z) - Improving genetic algorithms performance via deterministic population
shrinkage [9.334663477968027]
This paper presents an empirical study of the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs).
It consists of decreasing the population size during a GA run following a predetermined schedule, configured by a speed and a severity parameter.
Results show several speed-severity combinations where SVPS-GA preserves solution quality while improving performance by reducing the number of evaluations needed for success.
arXiv Detail & Related papers (2024-01-22T17:05:16Z) - Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic
Algorithm to Overcome the Noise [76.24156145566425]
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1 + (\lambda, \lambda))$ genetic algorithm is robust to noise.
arXiv Detail & Related papers (2023-05-08T08:49:01Z) - From Understanding Genetic Drift to a Smart-Restart Mechanism for
Estimation-of-Distribution Algorithms [16.904475483445452]
We develop a smart-restart mechanism for estimation-of-distribution algorithms (EDAs).
By stopping runs when the risk for genetic drift is high, it automatically runs the EDA in good parameter regimes.
We show that the smart-restart mechanism finds much better values for the population size than those suggested in the literature.
arXiv Detail & Related papers (2022-06-18T02:46:52Z) - A Dimensionality Reduction Method for Finding Least Favorable Priors
with a Focus on Bregman Divergence [108.28566246421742]
This paper develops a dimensionality reduction method that allows us to move the optimization to a finite-dimensional setting with an explicit bound on the dimension.
In order to make progress on the problem, we restrict ourselves to Bayesian risks induced by a relatively large class of loss functions, namely Bregman divergences.
arXiv Detail & Related papers (2022-02-23T16:22:28Z) - The Univariate Marginal Distribution Algorithm Copes Well With Deception
and Epistasis [9.853329403413701]
We show that the negative finding is caused by an unfortunate choice of the parameters of the UMDA.
Our result shows that the UMDA can cope better with local optima than evolutionary algorithms.
arXiv Detail & Related papers (2020-07-16T12:07:09Z) - Private Stochastic Non-Convex Optimization: Adaptive Algorithms and
Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages of these methods over standard gradient methods on two popular deep learning tasks.
arXiv Detail & Related papers (2020-06-24T06:01:24Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Lower Bounds for Non-Elitist Evolutionary Algorithms via Negative
Multiplicative Drift [9.853329403413701]
We show that a simple negative drift theorem for multiplicative drift scenarios can simplify existing analyses.
We discuss in more detail Lehre's (PPSN 2010) "negative drift in populations" method, one of the most general tools to prove lower bounds on the runtime of non-elitist mutation-based evolutionary algorithms.
arXiv Detail & Related papers (2020-05-02T15:10:09Z) - A Simplified Run Time Analysis of the Univariate Marginal Distribution
Algorithm on LeadingOnes [9.853329403413701]
We prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA).
We show further run time gains from small selection rates.
Under similar assumptions, we prove that a bound that matches our upper bound up to constant factors holds with high probability.
arXiv Detail & Related papers (2020-04-10T10:20:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.