Hardest Monotone Functions for Evolutionary Algorithms
- URL: http://arxiv.org/abs/2311.07438v2
- Date: Tue, 01 Jul 2025 14:22:27 GMT
- Title: Hardest Monotone Functions for Evolutionary Algorithms
- Authors: Marc Kaufmann, Maxime Larcher, Johannes Lengler, Oliver Sieberling
- Abstract summary: We revisit the question of how hard it can be for the $(1+1)$ Evolutionary Algorithm to optimize monotone pseudo-Boolean functions. We show that our construction provides the first explicit example which realizes the pessimism of the PO-EA model.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper we revisit the question of how hard it can be for the $(1+1)$ Evolutionary Algorithm to optimize monotone pseudo-Boolean functions. By introducing a more pessimistic stochastic process, the partially-ordered evolutionary algorithm (PO-EA) model, Jansen first proved a runtime bound of $O(n^{3/2})$. More recently, Lengler, Martinsson and Steger improved this upper bound to $O(n \log^2 n)$ by an entropy compression argument. In this work, we analyze monotone functions that may adversarially vary at each step of the optimization, so-called dynamic monotone functions. We introduce the function Switching Dynamic BinVal (SDBV) and prove, using a combinatorial argument, that for the $(1+1)$-EA with any mutation rate $p \in [0,1]$, SDBV is drift minimizing within the class of dynamic monotone functions. We further show that the $(1+1)$-EA optimizes SDBV in $\Theta(n^{3/2})$ generations. Therefore, our construction provides the first explicit example which realizes the pessimism of the PO-EA model. Our simulations demonstrate matching runtimes for both the static and the self-adjusting $(1,\lambda)$-EA and $(1+\lambda)$-EA. Moreover, devising an example for fixed dimension, we illustrate that drift minimization does not equal maximal runtime beyond asymptotic analysis.
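The objects in the abstract are simple enough to sketch in code. Below is a minimal Python illustration of the $(1+1)$-EA with standard bit mutation on a dynamic monotone function. As a stand-in for SDBV (whose switching rule is defined in the paper itself), the sketch draws a fresh Dynamic BinVal function each generation, i.e. a uniformly random permutation of the weights $2^0, \dots, 2^{n-1}$; the defining feature, that parent and offspring are compared under the same per-generation function, is preserved.

```python
import random

def one_plus_one_ea_dynamic(n, p=None, max_gens=10**6):
    """(1+1)-EA with standard bit mutation on a dynamic monotone function.

    Illustration only: the dynamic fitness here is Dynamic BinVal (a fresh
    random permutation of the weights 2^0, ..., 2^(n-1) each generation);
    the paper's SDBV chooses the weights adversarially instead.
    """
    p = 1.0 / n if p is None else p                  # standard mutation rate c/n, c = 1
    x = [random.randint(0, 1) for _ in range(n)]
    weights = [2 ** i for i in range(n)]
    for gen in range(1, max_gens + 1):
        if all(x):                                   # optimum 1^n reached
            return gen
        random.shuffle(weights)                      # new fitness function this generation
        f = lambda z: sum(w for w, b in zip(weights, z) if b)
        y = [b ^ (random.random() < p) for b in x]   # flip each bit independently
        if f(y) >= f(x):                             # parent and offspring compared
            x = y                                    # under the same dynamic function
    return None                                      # generation budget exhausted
```

Replacing the random permutation with the adversarial weight choice of SDBV is what realizes the pessimistic $\Theta(n^{3/2})$ runtime of the PO-EA model.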
Related papers
- Proven Runtime Guarantees for How the MOEA/D Computes the Pareto Front From the Subproblem Solutions [9.044970217182117]
The decomposition-based multi-objective evolutionary algorithm (MOEA/D) does not directly optimize a given multi-objective function $f$, but instead optimizes $N + 1$ single-objective subproblems of $f$ in a co-evolutionary manner.
We analyze for the first time how the MOEA/D with only standard mutation operators computes the whole Pareto front of the OneMinMax benchmark.
Our overall bound for power-law mutation suggests that the MOEA/D performs best for $N = O(n^{\beta - 1})$.
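As context for the decomposition, here is a minimal sketch of how MOEA/D might scalarize the bi-objective OneMinMax benchmark $f(x) = (n - |x|_1, |x|_1)$ into $N + 1$ subproblems, assuming the standard weighted-sum decomposition (the paper's exact scalarization may differ):

```python
def oneminmax(x):
    """Bi-objective OneMinMax: maximize both the number of zeros and of ones."""
    ones = sum(x)
    return (len(x) - ones, ones)

def subproblem(i, N):
    """i-th weighted-sum scalarization of OneMinMax, for i = 0, ..., N."""
    w = i / N
    return lambda x: (1 - w) * oneminmax(x)[0] + w * oneminmax(x)[1]

# Each subproblem pulls toward a different region of the Pareto front:
# subproblem(0, N) rewards the all-zeros string, subproblem(N, N) the all-ones string.
```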
arXiv Detail & Related papers (2024-05-02T05:32:19Z) - Kernelized Normalizing Constant Estimation: Bridging Bayesian Quadrature
and Bayesian Optimization [51.533164528799084]
We show that to estimate the normalizing constant within a small relative error, the level of difficulty depends on the value of $\lambda$.
We find that this pattern holds true even when the function evaluations are noisy.
arXiv Detail & Related papers (2024-01-11T07:45:09Z) - Statistical Learning under Heterogeneous Distribution Shift [71.8393170225794]
The ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x}, \mathbf{y}] = f_\star(\mathbf{x}) + g_\star(\mathbf{y})$.
arXiv Detail & Related papers (2023-02-27T16:34:21Z) - Near-Optimal Non-Convex Stochastic Optimization under Generalized
Smoothness [21.865728815935665]
Two recent works established the $O(\epsilon^{-3})$ sample complexity to obtain an $O(\epsilon)$-stationary point.
However, both require a large batch size on the order of $\mathrm{poly}(\epsilon^{-1})$, which is not only computationally burdensome but also unsuitable for streaming applications.
In this work, we solve the prior two problems simultaneously by revisiting a simple variant of the STORM algorithm.
arXiv Detail & Related papers (2023-02-13T00:22:28Z) - Improved Dynamic Regret for Online Frank-Wolfe [54.690867216880356]
We investigate the dynamic regret of online Frank-Wolfe (OFW), which is an efficient projection-free algorithm for online convex optimization.
In this paper, we derive improved dynamic regret bounds for OFW by extending the fast convergence rates of FW from offline optimization to online optimization.
arXiv Detail & Related papers (2023-02-11T07:19:51Z) - Optimal Query Complexities for Dynamic Trace Estimation [59.032228008383484]
We consider the problem of minimizing the number of matrix-vector queries needed for accurate trace estimation in the dynamic setting where our underlying matrix is changing slowly.
We provide a novel binary tree summation procedure that simultaneously estimates all $m$ traces up to $\epsilon$ error with $\delta$ failure probability.
Our lower bounds (1) give the first tight bounds for Hutchinson's estimator in the matrix-vector product model with Frobenius norm error even in the static setting, and (2) are the first unconditional lower bounds for dynamic trace estimation.
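For reference, the classical Hutchinson estimator against which these bounds are stated takes the following form; this is only the static baseline, not the paper's binary-tree procedure for the dynamic setting:

```python
import numpy as np

def hutchinson_trace(matvec, n, num_queries, rng=None):
    """Classical Hutchinson estimator: tr(A) is approximated by the mean of
    z^T A z over random sign vectors z, using only matrix-vector queries."""
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(num_queries):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher query vector
        total += z @ matvec(z)                # one matrix-vector product
    return total / num_queries

# Usage: estimate the trace of a random symmetric matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2
print(hutchinson_trace(lambda z: A @ z, 100, num_queries=500, rng=rng), np.trace(A))
```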
arXiv Detail & Related papers (2022-09-30T04:15:44Z) - Multi-block-Single-probe Variance Reduced Estimator for Coupled
Compositional Optimization [49.58290066287418]
We propose a novel method named the Multi-block-Single-probe Variance Reduced (MSVR) estimator to alleviate the complexity of coupled compositional problems.
Our results improve upon prior ones in several aspects, including the order of sample complexities and the dependence on the strong convexity parameter.
arXiv Detail & Related papers (2022-07-18T12:03:26Z) - Hard Problems are Easier for Success-based Parameter Control [0.0]
Self-adjusting parameters in evolutionary algorithms can match or outperform the best fixed parameters on discrete problems.
We show that self-adjustment works as intended in the absence of easy slopes.
We discuss implications for the everywhere hard function LeadingOnes and a new class OneMaxBlocks of everywhere hard functions with tunable difficulty.
arXiv Detail & Related papers (2022-04-12T13:56:29Z) - Self-adjusting Population Sizes for the $(1, \lambda)$-EA on Monotone
Functions [7.111443975103329]
We study the $(1,\lambda)$-EA with mutation rate $c/n$ for $c \le 1$, where the population size is adaptively controlled with the $(1:s+1)$-success rule.
We show that this setup with $c=1$ is efficient on OneMax for $s < 1$, but inefficient if $s \ge 18$.
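A sketch of the mechanism, assuming one common formulation of the $(1:s+1)$-success rule (a successful generation divides $\lambda$ by an update factor $F > 1$, an unsuccessful one multiplies it by $F^{1/s}$), shown here on OneMax with $c = 1$:

```python
import random

def self_adjusting_one_comma_lambda_ea(n, s=0.5, F=1.5, max_evals=10**7):
    """(1,lambda)-EA on OneMax with a (1:s+1)-success rule for lambda.
    A minimal sketch of one common formulation, not the paper's exact setup."""
    onemax = sum                                 # fitness: number of one-bits
    x = [random.randint(0, 1) for _ in range(n)]
    lam, evals = 1.0, 0
    while onemax(x) < n and evals < max_evals:
        offspring = []
        for _ in range(max(1, round(lam))):
            y = [b ^ (random.random() < 1.0 / n) for b in x]   # mutation rate 1/n
            offspring.append(y)
            evals += 1
        best = max(offspring, key=onemax)
        if onemax(best) > onemax(x):
            lam = max(lam / F, 1.0)              # success: fewer offspring next time
        else:
            lam = lam * F ** (1.0 / s)           # failure: more offspring next time
        x = best                                 # comma selection: parent always replaced
    return evals
```

With small $s$ the rule grows $\lambda$ aggressively after failures, which is what makes the setup efficient for $s < 1$ in the result above.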
arXiv Detail & Related papers (2022-04-01T15:46:12Z) - Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$
Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private algorithm is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z) - Runtime analysis of the (mu+1)-EA on the Dynamic BinVal function [0.0]
We study evolutionary algorithms in a dynamic setting, where for each generation a different fitness function is chosen.
In particular, the hardest region for optimization is NOT around the optimum.
arXiv Detail & Related papers (2020-10-26T08:55:53Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Optimal Mutation Rates for the $(1+\lambda)$ EA on OneMax [1.0965065178451106]
We extend the analysis of optimal mutation rates to two variants of the OneMax problem.
We compute for all population sizes $\lambda \in \{2^i \mid 0 \le i \le 18\}$ which mutation rates minimize the expected running time.
Among other things, our results provide a lower bound against which common evolutionary approaches can be measured.
arXiv Detail & Related papers (2020-06-20T01:23:14Z) - A Simple Convergence Proof of Adam and Adagrad [74.24716715922759]
We provide a simple proof of convergence covering both the Adam and Adagrad adaptive optimization algorithms, with an $O(d\ln(N)/\sqrt{N})$ bound.
Adam converges at the same $O(d\ln(N)/\sqrt{N})$ rate when used with its default parameters.
arXiv Detail & Related papers (2020-03-05T01:56:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.