First Steps Towards a Runtime Analysis When Starting With a Good
Solution
- URL: http://arxiv.org/abs/2006.12161v3
- Date: Fri, 30 Jun 2023 17:29:55 GMT
- Title: First Steps Towards a Runtime Analysis When Starting With a Good
Solution
- Authors: Denis Antipov, Maxim Buzdalov, Benjamin Doerr
- Abstract summary: In practical applications it may be possible to guess solutions that are better than random ones.
We show that different algorithms profit to a very different degree from a better initial solution.
This could suggest that evolutionary algorithms better exploiting good initial solutions are still to be found.
- Score: 8.34061303235504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The mathematical runtime analysis of evolutionary algorithms traditionally
regards the time an algorithm needs to find a solution of a certain quality
when initialized with a random population. In practical applications it may be
possible to guess solutions that are better than random ones. We start a
mathematical runtime analysis for such situations. We observe that different
algorithms profit to a very different degree from a better initialization. We
also show that the optimal parameterization of the algorithm can depend
strongly on the quality of the initial solutions. To overcome this difficulty,
self-adjusting and randomized heavy-tailed parameter choices can be profitable.
Finally, we observe a larger gap between the performance of the best
evolutionary algorithm we found and the corresponding black-box complexity.
This could suggest that evolutionary algorithms better exploiting good initial
solutions are still to be found. These first findings stem from analyzing the
performance of the $(1+1)$ evolutionary algorithm and the static,
self-adjusting, and heavy-tailed $(1 + (\lambda,\lambda))$ GA on the OneMax
benchmark. We are optimistic that the question of how to profit from good
initial solutions is interesting beyond these first examples.
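The abstract centers on the $(1+1)$ evolutionary algorithm run on OneMax from a better-than-random start. A minimal, self-contained sketch of that setting (not the paper's exact experimental setup; the warm start with 10 wrong bits, the iteration cap, and the random seed are illustrative assumptions):

```python
import random

def onemax(x):
    """OneMax fitness: number of one-bits; the optimum is the all-ones string."""
    return sum(x)

def one_plus_one_ea(x, max_iters=10**6, rng=random):
    """(1+1) EA with standard bit mutation (rate 1/n) and elitist selection,
    started from the given bit string rather than a random one."""
    n = len(x)
    fx = onemax(x)
    iters = 0
    while fx < n and iters < max_iters:
        # Flip each bit independently with probability 1/n.
        y = [bit ^ 1 if rng.random() < 1.0 / n else bit for bit in x]
        fy = onemax(y)
        if fy >= fx:  # accept the offspring if it is not worse
            x, fx = y, fy
        iters += 1
    return x, iters

n = 100
rng = random.Random(42)
# "Good" initial solution: only 10 wrong bits, versus ~n/2 for a random start.
good = [1] * (n - 10) + [0] * 10
x, t_good = one_plus_one_ea(list(good), rng=rng)
```

Starting with only 10 wrong bits, such a run typically finishes in a few hundred iterations, whereas from a random initialization the (1+1) EA needs an expected $(1+o(1)) e n \ln n$ iterations on OneMax.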
Related papers
- Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
arXiv Detail & Related papers (2024-01-19T07:40:24Z) - Fast Re-Optimization of LeadingOnes with Frequent Changes [0.9281671380673306]
We show that the re-optimization approach suggested by Doerr et al. reaches a limit when the problem instances are prone to more frequent changes.
We propose a modification of their algorithm which interpolates between greedy search around the previous-best and the current-best solution.
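The interpolation idea in this blurb can be illustrated with a hypothetical sketch on LeadingOnes; the mixing probability `p`, the mutation operator, and the elitist acceptance rule below are assumptions for illustration, not the authors' actual algorithm:

```python
import random

def leadingones(x):
    """LeadingOnes fitness: length of the prefix of one-bits."""
    k = 0
    for b in x:
        if b != 1:
            break
        k += 1
    return k

def interpolated_step(prev_best, cur_best, fitness, p=0.5, rng=random):
    """With probability p, mutate the previous-best solution (search near the
    pre-change optimum); otherwise mutate the current-best. Accept the child
    into the current-best if it is not worse (elitist rule, an assumption)."""
    parent = prev_best if rng.random() < p else cur_best
    n = len(parent)
    child = [b ^ 1 if rng.random() < 1.0 / n else b for b in parent]
    if fitness(child) >= fitness(cur_best):
        cur_best = child
    return cur_best

rng = random.Random(1)
n = 30
prev_best = [1] * 10 + [0] * (n - 10)  # best solution before the change
cur_best = list(prev_best)
f0 = leadingones(cur_best)
for _ in range(2000):
    cur_best = interpolated_step(prev_best, cur_best, leadingones, rng=rng)
```

Because acceptance is elitist, the current-best fitness never decreases; tuning `p` trades off exploiting the pre-change solution against searching around the current one.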
arXiv Detail & Related papers (2022-09-09T16:51:41Z) - Choosing the Right Algorithm With Hints From Complexity Theory [16.33500498939925]
We show that the Metropolis algorithm is clearly the best of the algorithms considered for reasonable problem sizes.
An artificial algorithm of this type with an $O(n \log n)$ runtime leads to the result that the significance-based compact genetic algorithm (sig-cGA) can solve the DLB problem in time $O(n \log n)$ with high probability.
arXiv Detail & Related papers (2021-09-14T11:12:32Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - Towards Time-Optimal Any-Angle Path Planning With Dynamic Obstacles [1.370633147306388]
Path finding is a well-studied problem in AI, which is often framed as graph search.
We present two algorithms, grounded in the same idea, that can obtain provably optimal solutions to the considered problem.
arXiv Detail & Related papers (2021-04-14T07:59:53Z) - Double Coverage with Machine-Learned Advice [100.23487145400833]
We study the fundamental online $k$-server problem in a learning-augmented setting.
We show that our algorithm achieves, for any $k$, an almost optimal consistency-robustness tradeoff.
arXiv Detail & Related papers (2021-03-02T11:04:33Z) - Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by the recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data science, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Analysis of Evolutionary Algorithms on Fitness Function with
Time-linkage Property [27.660240128423176]
In real-world applications, many optimization problems have the time-linkage property, that is, the objective function value relies on the current solution as well as the historical solutions.
This paper takes the first step to rigorously analyze evolutionary algorithms for time-linkage functions.
arXiv Detail & Related papers (2020-04-26T07:56:40Z) - Model Selection in Contextual Stochastic Bandit Problems [51.94632035240787]
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to achieve better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.