Fourier Analysis Meets Runtime Analysis: Precise Runtimes on Plateaus
- URL: http://arxiv.org/abs/2302.08021v3
- Date: Tue, 2 May 2023 01:20:27 GMT
- Title: Fourier Analysis Meets Runtime Analysis: Precise Runtimes on Plateaus
- Authors: Benjamin Doerr, Andrew James Kelley
- Abstract summary: We propose a new method based on discrete Fourier analysis to analyze the time evolutionary algorithms spend on plateaus.
We also use this method to analyze the runtime of the $(1+1)$ evolutionary algorithm on a new benchmark consisting of $n/\ell$ plateaus of effective size $2^\ell-1$.
- Score: 9.853329403413701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method based on discrete Fourier analysis to analyze the
time evolutionary algorithms spend on plateaus. This immediately gives a
concise proof of the classic estimate of the expected runtime of the $(1+1)$
evolutionary algorithm on the Needle problem due to Garnier, Kallel, and
Schoenauer (1999).
We also use this method to analyze the runtime of the $(1+1)$ evolutionary
algorithm on a new benchmark consisting of $n/\ell$ plateaus of effective size
$2^\ell-1$ which have to be optimized sequentially in a LeadingOnes fashion.
Using our new method, we determine the precise expected runtime both for
static and fitness-dependent mutation rates. We also determine the
asymptotically optimal static and fitness-dependent mutation rates. For $\ell =
o(n)$, the optimal static mutation rate is approximately $1.59/n$. The optimal
fitness-dependent mutation rate, when the first $k$ fitness-relevant bits have
been found, is asymptotically $1/(k+1)$. These results, so far only proven for
the single-instance problem LeadingOnes, thus hold for a much broader class of
problems. We expect similar extensions to be true for other important results
on LeadingOnes. We are also optimistic that our Fourier analysis approach can
be applied to other plateau problems as well.
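To make the setup concrete, here is a minimal Python sketch, not the authors' code: it implements the standard $(1+1)$ EA, the Needle function, and a block-plateau function in the spirit of the benchmark described in the abstract. The names `needle`, `plateau_blocks`, and `one_plus_one_ea` are mine; reading the $1/(k+1)$ rate with $k$ taken as the number of bits in already-solved blocks is an assumption, and the rate is capped at $1/2$ purely so the demo cannot stall at fitness zero.

```python
import random

def needle(x):
    """Needle function: only the all-ones string has positive fitness;
    every other string lies on one flat plateau."""
    return 1 if all(x) else 0

def plateau_blocks(x, ell):
    """Hypothetical reconstruction of the benchmark from the abstract:
    the n bits are split into n/ell blocks that must be solved left to
    right, LeadingOnes-style.  Fitness is the number of leading all-ones
    blocks, so the current block is a plateau of effective size 2^ell - 1."""
    for k, start in enumerate(range(0, len(x), ell)):
        if not all(x[start:start + ell]):
            return k
    return len(x) // ell

def one_plus_one_ea(f, n, rate, target, max_iters=10**7):
    """Standard (1+1) EA: flip each bit of the parent independently with
    probability rate(current_fitness); keep the offspring if it is at
    least as fit.  Accepting equal fitness is what lets the algorithm
    random-walk across a plateau."""
    x = [random.random() < 0.5 for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        p = rate(fx)
        y = [b != (random.random() < p) for b in x]
        fy = f(y)
        if fy >= fx:
            x, fx = y, fy
        if fx >= target:
            return t
    return max_iters

if __name__ == "__main__":
    n, ell = 24, 3
    f = lambda x: plateau_blocks(x, ell)
    # Asymptotically optimal static rate per the abstract.
    static = lambda fx: 1.59 / n
    # 1/(k+1) with k read as ell * (solved blocks); capped at 1/2 so the
    # sketch never deterministically complements the whole string.
    adaptive = lambda fx: min(0.5, 1.0 / (ell * fx + 1))
    for name, rate in (("static 1.59/n", static), ("adaptive 1/(k+1)", adaptive)):
        runs = [one_plus_one_ea(f, n, rate, target=n // ell) for _ in range(20)]
        print(f"{name}: mean iterations = {sum(runs) / len(runs):.0f}")
```

Running the sketch only illustrates the setting; the precise expected runtimes and the optimality of $1.59/n$ and $1/(k+1)$ are exactly what the paper establishes via its Fourier-analytic method.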
Related papers
- Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z) - Gradient-free optimization of highly smooth functions: improved analysis
and a new algorithm [87.22224691317766]
This work studies problems with zero-order noisy oracle information under the assumption that the objective function is highly smooth.
We consider two kinds of zero-order projected gradient descent algorithms.
arXiv Detail & Related papers (2023-06-03T17:05:13Z) - Larger Offspring Populations Help the $(1 + (\lambda, \lambda))$ Genetic
Algorithm to Overcome the Noise [76.24156145566425]
Evolutionary algorithms are known to be robust to noise in the evaluation of the fitness.
We analyze to what extent the $(1+(\lambda,\lambda))$ genetic algorithm is robust to noise.
arXiv Detail & Related papers (2023-05-08T08:49:01Z) - Stochastic Nonsmooth Convex Optimization with Heavy-Tailed Noises:
High-Probability Bound, In-Expectation Rate and Initial Distance Adaptation [22.758674468435302]
In the heavy-tailed noise regime, the difference between the stochastic gradient and the true gradient is assumed to have a finite $p$-th moment.
This paper provides a comprehensive analysis of nonsmooth convex optimization with heavy-tailed noises.
arXiv Detail & Related papers (2023-03-22T03:05:28Z) - Runtime Analysis for Permutation-based Evolutionary Algorithms [9.044970217182117]
We show that a heavy-tailed version of the scramble operator, as in the bit-string case, leads to a speed-up of order $m^{\Theta(m)}$ on jump functions with jump size $m$.
A short empirical analysis confirms these findings, but also reveals that small implementation details like the rate of void mutations can make an important difference.
arXiv Detail & Related papers (2022-07-05T15:49:29Z) - Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with
Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
arXiv Detail & Related papers (2022-03-30T11:39:00Z) - Runtime analysis of the (mu+1)-EA on the Dynamic BinVal function [0.0]
We study evolutionary algorithms in a dynamic setting, where for each generation a different fitness function is chosen.
In particular, the hardest region for optimization is NOT around the optimum.
arXiv Detail & Related papers (2020-10-26T08:55:53Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z) - Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax [1.0965065178451106]
We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
We compute, for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$, which mutation rates minimize the expected running time.
Among other results, this provides a lower bound against which common evolutionary approaches can be measured.
arXiv Detail & Related papers (2020-06-20T01:23:14Z) - A Rigorous Runtime Analysis of the $(1 + (\lambda, \lambda))$ GA on Jump
Functions [8.34061303235504]
We conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark.
For the isolated problem of leaving the local optimum of jump functions, we determine provably optimal parameters that lead to a runtime of $(n/k)^{k/2} e^{\Theta(k)}$.
arXiv Detail & Related papers (2020-04-14T17:54:12Z) - A new regret analysis for Adam-type algorithms [78.825194932103]
In theory, regret guarantees for online convex optimization require a rapidly decaying $\beta_1 \to 0$ schedule.
We propose a novel framework that allows us to derive optimal, data-dependent regret bounds with a constant $\beta_1$.
arXiv Detail & Related papers (2020-03-21T19:19:51Z)