Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax
- URL: http://arxiv.org/abs/2006.11457v1
- Date: Sat, 20 Jun 2020 01:23:14 GMT
- Title: Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax
- Authors: Maxim Buzdalov and Carola Doerr
- Abstract summary: We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
We compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ which mutation rates minimize the expected running time.
Our results not only provide a lower bound against which we can measure common evolutionary approaches, but also give insight into the structure of the optimal parameter choices.
- Score: 1.0965065178451106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The OneMax problem, alternatively known as the Hamming distance problem, is
often referred to as the "drosophila of evolutionary computation (EC)", because
of its high relevance in theoretical and empirical analyses of EC approaches.
It is therefore surprising that even for the simplest of all mutation-based
algorithms, Randomized Local Search and the (1+1) EA, the optimal mutation
rates were determined only very recently, in a GECCO 2019 poster.
In this work, we extend the analysis of optimal mutation rates to two
variants of the $(1+\lambda)$ EA and to the $(1+\lambda)$ RLS. To do this, we
use dynamic programming and, for the $(1+\lambda)$ EA, numeric optimization,
both requiring $\Theta(n^3)$ time for problem dimension $n$. With this in hand,
we compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$
and for problem dimension $n \in \{1000, 2000, 5000\}$ which mutation rates
minimize the expected running time and which ones maximize the expected
progress.
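The algorithm whose optimal mutation rates are computed here can be sketched as follows. This is a minimal illustration of a $(1+\lambda)$ EA with standard bit mutation on OneMax, not the authors' implementation; the function names and the elitist tie-breaking in favor of the offspring are my assumptions.

```python
import random

def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)

def one_plus_lambda_ea(n, lam, rate, seed=1):
    """(1+lambda) EA with standard bit mutation on OneMax.

    Each generation, lam offspring are created by flipping every bit of
    the parent independently with probability `rate`; the best offspring
    replaces the parent if it is at least as good (elitist selection).
    Returns the number of generations until the all-ones string is found.
    """
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = onemax(parent)
    generations = 0
    while fitness < n:
        generations += 1
        offspring = []
        for _ in range(lam):
            child = [1 - b if rng.random() < rate else b for b in parent]
            offspring.append((onemax(child), child))
        best_f, best_c = max(offspring, key=lambda t: t[0])
        if best_f >= fitness:  # accept ties as well as improvements
            parent, fitness = best_c, best_f
    return generations
```

A fixed mutation rate of $1/n$ (e.g. `one_plus_lambda_ea(1000, 8, 1/1000)`) is the classical static choice; the point of the paper is that the rate minimizing the expected remaining time depends on the current distance to the optimum and on $\lambda$.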
Our results not only provide a lower bound against which we can measure
common evolutionary approaches, but also yield insight into the structure
of these optimal parameter choices. For example, we show that, for large
population sizes, the best number of bits to flip is not monotone in the
distance to the optimum. We also observe that the expected remaining running
times are not necessarily unimodal for the $(1+\lambda)$ EA$_{0 \rightarrow 1}$
with shifted mutation.
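The dynamic-programming idea behind these tables can be sketched for the simplest case, an elitist RLS on OneMax that flips exactly $k$ uniformly chosen distinct bits per step, choosing $k$ optimally at each distance $d$ to the optimum. This is a simplified sketch under my own assumptions; the paper's computation additionally covers the $(1+\lambda)$ populations, shifted/resampling mutation variants, and standard bit mutation.

```python
from math import comb

def optimal_rls_times(n):
    """Dynamic program over the Hamming distance d to the optimum.

    If i of the k flipped bits were wrong, the new distance is
    d + k - 2i, a strict improvement iff 2i > k; the number of wrong
    bits flipped is hypergeometrically distributed.  Since an elitist
    algorithm stays at distance d until a strict improvement occurs,
    E[d] = (1 + sum over improvements of p * E[d']) / P(improvement),
    minimized over k.  Distances d' < d are already solved, so a single
    upward sweep suffices; the triple loop uses O(n^3) arithmetic
    operations, matching the Theta(n^3) bound stated in the abstract.

    Returns (E, K): E[d] = optimal expected remaining time from
    distance d, K[d] = the minimizing number of bits to flip.
    """
    E = [0.0] * (n + 1)
    K = [0] * (n + 1)
    for d in range(1, n + 1):
        best = None
        for k in range(1, n + 1):
            denom = comb(n, k)
            p_gain = 0.0  # probability of a strict improvement
            acc = 1.0     # 1 + sum of p * E[new distance]
            for i in range(k // 2 + 1, min(k, d) + 1):
                if k - i > n - d:
                    continue  # not enough correct bits to flip
                p = comb(d, i) * comb(n - d, k - i) / denom
                p_gain += p
                acc += p * E[d + k - 2 * i]
            if p_gain > 0:
                t = acc / p_gain
                if best is None or t < best:
                    best, K[d] = t, k
        E[d] = best
    return E, K
```

For $n = 2$ this gives the expected pattern: at distance $2$ flipping both bits reaches the optimum in one step, while at distance $1$ flipping the single wrong bit takes $2$ steps in expectation.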
Related papers
- Nearly Minimax Optimal Regret for Learning Linear Mixture Stochastic
Shortest Path
We study the Stochastic Shortest Path (SSP) problem with a linear mixture transition kernel.
An agent repeatedly interacts with an environment and seeks to reach a certain goal state while minimizing the cumulative cost.
Existing works often assume a strictly positive lower bound of the iteration cost function or an upper bound of the expected length for the optimal policy.
arXiv Detail & Related papers (2024-02-14T07:52:00Z) - Kernelized Normalizing Constant Estimation: Bridging Bayesian Quadrature
and Bayesian Optimization
We show that to estimate the normalizing constant within a small relative error, the level of difficulty depends on the value of $\lambda$.
We find that this pattern holds true even when the function evaluations are noisy.
arXiv Detail & Related papers (2024-01-11T07:45:09Z) - Hardest Monotone Functions for Evolutionary Algorithms
We conjecture that for the self-adjusting $(1,\lambda)$-EA, Adversarial Dynamic BinVal (ADBV) is the hardest dynamic monotone function to optimize.
We introduce the function Switching Dynamic BinVal (SDBV) which coincides with ADBV whenever the number of remaining zeros in the search point is strictly less than $n/2$.
arXiv Detail & Related papers (2023-11-13T16:13:30Z) - Tight Runtime Bounds for Static Unary Unbiased Evolutionary Algorithms on Linear Functions
Witt showed that the (1+1) Evolutionary Algorithm with standard bit mutation needs time to find the optimum of any linear function.
We investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator.
arXiv Detail & Related papers (2023-02-23T21:09:15Z) - Fourier Analysis Meets Runtime Analysis: Precise Runtimes on Plateaus
We propose a new method based on discrete Fourier analysis to analyze the time evolutionary algorithms spend on plateaus.
We also use this method to analyze the runtime of the $(1+1)$ evolutionary algorithm on a new benchmark consisting of $n/\ell$ plateaus of effective size $2^\ell - 1$.
arXiv Detail & Related papers (2023-02-16T01:46:06Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the
end
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, statement (a) is the case.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Optimal Query Complexities for Dynamic Trace Estimation
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
arXiv Detail & Related papers (2022-09-30T04:15:44Z) - When Non-Elitism Meets Time-Linkage Problems
We analyze the influence of non-elitism by comparing the performance of the elitist $(1+\lambda)$ EA and its non-elitist counterpart, the $(1,\lambda)$ EA.
We prove that with probability $1$, the $(1,\lambda)$ EA can reach the global optimum and its expected runtime is $O(n^{3+c}\log n)$ with $\lambda = c \log_{\frac{e}{e-1}} n$ for the constant $c \ge 1$.
arXiv Detail & Related papers (2021-04-14T13:03:42Z) - Runtime analysis of the $(\mu+1)$-EA on the Dynamic BinVal function
We study evolutionary algorithms in a dynamic setting, where for each generation a different fitness function is chosen.
In particular, the hardest region for optimization is NOT around the optimum.
arXiv Detail & Related papers (2020-10-26T08:55:53Z) - Optimal Robust Linear Regression in Nearly Linear Time
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Streaming Complexity of SVMs
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimension $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.