Improved Runtime Guarantees for the SPEA2 Multi-Objective Optimizer
- URL: http://arxiv.org/abs/2511.07150v1
- Date: Mon, 10 Nov 2025 14:43:02 GMT
- Title: Improved Runtime Guarantees for the SPEA2 Multi-Objective Optimizer
- Authors: Benjamin Doerr, Martin S. Krejca, Milan Stanković
- Abstract summary: We show that the SPEA2 has very different population dynamics than the NSGA-II. We show that the best runtime guarantee of $O(n^{k+1})$ is not only achieved for $\mu = \Theta(n)$ and $\lambda = O(n)$ but for arbitrary $\mu, \lambda = O(n^k)$.
- Score: 21.23874800091344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Together with the NSGA-II, the SPEA2 is one of the most widely used domination-based multi-objective evolutionary algorithms. For both algorithms, the known runtime guarantees are linear in the population size; for the NSGA-II, matching lower bounds exist. With a careful study of the more complex selection mechanism of the SPEA2, we show that it has very different population dynamics. From these, we prove runtime guarantees for the OneMinMax, LeadingOnesTrailingZeros, and OneJumpZeroJump benchmarks that depend less on the population size. For example, we show that the SPEA2 with parent population size $\mu \ge n - 2k + 3$ and offspring population size $\lambda$ computes the Pareto front of the OneJumpZeroJump benchmark with gap size $k$ in an expected number of $O( (\lambda+\mu)n + n^{k+1})$ function evaluations. This shows that the best runtime guarantee of $O(n^{k+1})$ is not only achieved for $\mu = \Theta(n)$ and $\lambda = O(n)$ but for arbitrary $\mu, \lambda = O(n^k)$. Thus, choosing suitable parameters -- a key challenge in using heuristic algorithms -- is much easier for the SPEA2 than the NSGA-II.
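As context for the benchmarks named in the abstract, here is a minimal Python sketch of the bi-objective OneJumpZeroJump benchmark with gap size $k$. The definition is assumed to follow the standard formulation from the runtime-analysis literature (both objectives are maximized); treat it as an illustrative assumption, not this paper's exact notation.

```python
def one_jump_zero_jump(x, k):
    """Sketch of the OneJumpZeroJump_k benchmark (both objectives maximized).

    x is a list of bits. The first objective is a Jump function on the
    number of ones; the second is the symmetric Jump function on zeros.
    Each objective has a fitness valley of width k just before its optimum.
    """
    n = len(x)
    ones = sum(x)
    zeros = n - ones
    # First objective: reward more ones, except in the gap of width k.
    f1 = k + ones if ones <= n - k or ones == n else n - ones
    # Second objective: symmetric, rewarding more zeros.
    f2 = k + zeros if zeros <= n - k or zeros == n else n - zeros
    return f1, f2
```

For $n = 6$, $k = 2$, the all-ones string scores $(8, 2)$ while a string with five ones falls into the first objective's valley and scores $(1, 3)$.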
Related papers
- Towards a Rigorous Understanding of the Population Dynamics of the NSGA-III: Tight Runtime Bounds [5.482532589225553]
We prove tight runtime bounds for NSGA-III on the bi-objective OneMinMax ($2$-OMM) problem. This is the first proven lower runtime bound for NSGA-III on a classical benchmark problem. We also improve the best known upper bound of NSGA-III on the $m$-objective OneMinMax problem.
arXiv Detail & Related papers (2025-11-10T14:11:07Z) - Speeding Up the NSGA-II via Dynamic Population Sizes [24.440172242035228]
We propose the dynamic NSGA-II (dNSGA-II), which is based on the popular NSGA-II and features a non-static population size. We prove that the dNSGA-II with parameters $\mu \geq 4(n + 1)$ and $\tau \geq \frac{256}{50} e n$ computes the full Pareto front of the OneMinMax benchmark. For an optimal choice of $\mu$ and $\tau$, the resulting runtime guarantee is $O(n \log n)$.
arXiv Detail & Related papers (2025-09-01T19:45:17Z) - A First Runtime Analysis of NSGA-III on a Many-Objective Multimodal Problem: Provable Exponential Speedup via Stochastic Population Update [1.223779595809275]
NSGA-III is a prominent algorithm in evolutionary many-objective optimization. This paper conducts a rigorous runtime analysis of NSGA-III on the many-objective $\mathrm{OJZJ}_{\mathrm{full}}$ benchmark. We show that NSGA-III is faster than NSGA-II by a factor of $\mu / n^{d/2}$ for some $\mu \in \omega(n^{d/2} \ln n)$.
arXiv Detail & Related papers (2025-05-02T13:26:05Z) - Speeding Up the NSGA-II With a Simple Tie-Breaking Rule [9.044970217182117]
We analyze a simple tie-breaking rule in the selection of the next population. We prove the effectiveness of this rule on the classic OneMinMax, LeadingOnesTrailingZeros, and OneJumpZeroJump benchmarks. For the bi-objective problems, we show runtime guarantees that do not increase when moderately increasing the population size over the minimum size.
arXiv Detail & Related papers (2024-12-16T16:15:37Z) - Provable Scaling Laws for the Test-Time Compute of Large Language Models [84.00141420901038]
We propose two algorithms that enjoy provable scaling laws for the test-time compute of large language models. One is a two-stage knockout-style algorithm, where candidates compete pairwise and winners advance to the next round. The other is a two-stage league-style algorithm, where each candidate is evaluated by its average win rate against multiple opponents.
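The two selection schemes above can be sketched in a few lines of Python. This is a hedged illustration, not the paper's algorithm: `beats(a, b)` is a hypothetical comparison oracle (e.g. a judge model deciding whether candidate `a` wins against `b`), and the two-stage structure of the paper is collapsed into a single selection pass.

```python
import random

def knockout_select(candidates, beats, rounds_per_pair=1):
    """Knockout-style: pair up candidates, advance the winner of each pair."""
    pool = list(candidates)
    while len(pool) > 1:
        random.shuffle(pool)
        next_pool = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            # a advances if it wins a majority of the repeated comparisons.
            wins_a = sum(beats(a, b) for _ in range(rounds_per_pair))
            next_pool.append(a if wins_a * 2 >= rounds_per_pair else b)
        if len(pool) % 2 == 1:  # an odd candidate out gets a bye
            next_pool.append(pool[-1])
        pool = next_pool
    return pool[0]

def league_select(candidates, beats):
    """League-style: pick the candidate with the best average win rate."""
    def win_rate(a):
        others = [b for b in candidates if b is not a]
        return sum(beats(a, b) for b in others) / len(others)
    return max(candidates, key=win_rate)
```

With a deterministic, transitive `beats`, both schemes return the strongest candidate; the interesting regimes analyzed in the paper are the noisy ones.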
arXiv Detail & Related papers (2024-11-29T05:29:47Z) - Mind the gap: Achieving a super-Grover quantum speedup by jumping to the end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula, this runtime guarantee holds.
arXiv Detail & Related papers (2022-12-03T02:45:23Z) - Quantum Goemans-Williamson Algorithm with the Hadamard Test and Approximate Amplitude Constraints [62.72309460291971]
We introduce a variational quantum algorithm for the Goemans-Williamson algorithm that uses only $n+1$ qubits.
Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit.
We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm for various NP-hard problems.
arXiv Detail & Related papers (2022-06-30T03:15:23Z) - Sharper Convergence Guarantees for Asynchronous SGD for Distributed and
Federated Learning [77.22019100456595]
We show a training algorithm for distributed computation workers with varying communication frequency.
In this work, we obtain a tighter convergence rate of $\mathcal{O}(\sigma^2 \epsilon^{-2} + \tau_{\mathrm{avg}} \epsilon^{-1})$.
We also show that the heterogeneity term in rate is affected by the average delay within each worker.
arXiv Detail & Related papers (2022-06-16T17:10:57Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z) - Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations [54.42518331209581]
We find an algorithm that finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\| \le \epsilon$) using $(\epsilon,\gamma)$-approximate stochastic points.
Our lower bounds here are novel even in the noiseless case.
arXiv Detail & Related papers (2020-06-24T04:41:43Z) - A Rigorous Runtime Analysis of the $(1 + (\lambda, \lambda))$ GA on Jump Functions [8.34061303235504]
We conduct the first runtime analysis of this algorithm on a multimodal problem class, the jump functions benchmark.
For the isolated problem of leaving the local optimum of jump functions, we determine provably optimal parameters that lead to a runtime of $(n/k)^{k/2} e^{\Theta(k)}$.
arXiv Detail & Related papers (2020-04-14T17:54:12Z)
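The jump functions benchmark appearing in the last entry can be sketched as follows. This is a minimal illustration assuming the standard single-objective $\mathrm{Jump}_k$ definition (fitness valley of width $k$ before the all-ones optimum), which is the usual formulation in this literature.

```python
def jump(x, k):
    """Sketch of the Jump_k benchmark (maximized) on a bit list x.

    Fitness increases with the number of ones, except for a valley of
    width k just before the all-ones optimum, which must be jumped over.
    """
    n = len(x)
    ones = sum(x)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones  # inside the valley: fitness drops sharply
```

For $n = 8$, $k = 3$, the all-ones string scores $11$, while a string with six ones sits in the valley and scores only $2$.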
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.