Tight Runtime Guarantees From Understanding the Population Dynamics of the GSEMO Multi-Objective Evolutionary Algorithm
- URL: http://arxiv.org/abs/2505.01266v1
- Date: Fri, 02 May 2025 13:40:25 GMT
- Title: Tight Runtime Guarantees From Understanding the Population Dynamics of the GSEMO Multi-Objective Evolutionary Algorithm
- Authors: Benjamin Doerr, Martin Krejca, Andre Opris
- Abstract summary: We study the dynamics of the global simple evolutionary multi-objective optimizer (GSEMO). We prove a lower bound of order $\Omega(n^2 \log n)$ for the classic CountingOnesCountingZeros (COCZ) benchmark. Our methods extend to other classic benchmarks and yield, e.g., the first $\Omega(n^{k+1})$ lower bound for the OJZJ benchmark.
- Score: 9.966672345291707
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The global simple evolutionary multi-objective optimizer (GSEMO) is a simple, yet often effective multi-objective evolutionary algorithm (MOEA). By only maintaining non-dominated solutions, it has a variable population size that automatically adjusts to the needs of the optimization process. The downside of the dynamic population size is that the population dynamics of this algorithm are harder to understand, resulting, e.g., in the fact that only sporadic tight runtime analyses exist. In this work, we significantly enhance our understanding of the dynamics of the GSEMO, in particular, for the classic CountingOnesCountingZeros (COCZ) benchmark. From this, we prove a lower bound of order $\Omega(n^2 \log n)$, for the first time matching the seminal upper bounds known for over twenty years. We also show that the GSEMO finds any constant fraction of the Pareto front in time $O(n^2)$, improving over the previous estimate of $O(n^2 \log n)$ for the time to find the first Pareto optimum. Our methods extend to other classic benchmarks and yield, e.g., the first $\Omega(n^{k+1})$ lower bound for the OJZJ benchmark in the case that the gap parameter is $k \in \{2,3\}$. We are therefore optimistic that our new methods will be useful in future mathematical analyses of MOEAs.
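To make the algorithm and benchmark discussed in the abstract concrete, below is a minimal Python sketch of the GSEMO on the COCZ benchmark. The COCZ definition, the dominance handling, and all identifiers follow the standard formulations in the runtime-analysis literature and are assumptions made for illustration; they are not taken from this page.

```python
import random

def cocz(x):
    """CountingOnesCountingZeros (standard literature definition, assumed here):
    f1 counts the ones in the whole string, f2 counts the ones in the first
    half plus the zeros in the second half; n must be even. Both are maximized."""
    n = len(x)
    half = n // 2
    f1 = sum(x)
    f2 = sum(x[:half]) + sum(1 - b for b in x[half:])
    return (f1, f2)

def weakly_dominates(u, v):
    """u weakly dominates v iff u is at least as good in every objective."""
    return all(ui >= vi for ui, vi in zip(u, v))

def strictly_dominates(u, v):
    return weakly_dominates(u, v) and u != v

def gsemo(f, n, max_evals):
    """Global SEMO sketch: the population keeps only mutually non-dominated
    solutions, so its size adjusts itself; standard bit mutation flips each
    bit independently with probability 1/n."""
    x = [random.randint(0, 1) for _ in range(n)]
    pop = {tuple(x): f(x)}          # genotype -> objective vector
    evals = 1
    while evals < max_evals:
        parent = list(random.choice(list(pop)))
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        fy = f(child)
        evals += 1
        # Accept the child iff no current member strictly dominates it ...
        if not any(strictly_dominates(fz, fy) for fz in pop.values()):
            # ... and then remove every member the child weakly dominates.
            pop = {z: fz for z, fz in pop.items() if not weakly_dominates(fy, fz)}
            pop[tuple(child)] = fy
    return pop

if __name__ == "__main__":
    front = gsemo(cocz, n=20, max_evals=20_000)
    print(sorted(front.values()))
```

The runtime statements in the abstract, such as $\Omega(n^2 \log n)$ and $O(n^2)$, refer to the number of fitness evaluations, i.e., the counter `evals` in this sketch.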
Related papers
- Illustrating the Efficiency of Popular Evolutionary Multi-Objective Algorithms Using Runtime Analysis [3.748255320979002]
We show that popular evolutionary multi-objective (EMO) algorithms have the same performance guarantee as the simple (G)SEMO algorithm.
We prove that, while GSEMO requires at least $n^n$ expected fitness evaluations to optimise OneTrapZeroTrap, popular EMO algorithms NSGA-II, NSGA-III and SMS-EMOA, all enhanced with a mild diversity mechanism of avoiding genotype duplication, only require $O(n \log n)$ expected fitness evaluations.
arXiv Detail & Related papers (2024-05-22T12:09:00Z) - Already Moderate Population Sizes Provably Yield Strong Robustness to Noise [53.27802701790209]
We show that two evolutionary algorithms can tolerate constant noise probabilities without increasing the runtime on the OneMax benchmark.
Our results are based on a novel proof argument that the noiseless offspring can be seen as a biased uniform crossover between the parent and the noisy offspring.
arXiv Detail & Related papers (2024-04-02T16:35:52Z) - Runtime Analysis of the SMS-EMOA for Many-Objective Optimization [14.309243378538014]
This paper conducts the first rigorous analysis of the SMS-EMOA for many-objective optimization. We first propose a many-objective counterpart of the bi-objective OJZJ benchmark. We show that the SMS-EMOA has a performance comparable to the GSEMO and the NSGA-II.
arXiv Detail & Related papers (2023-12-16T02:23:09Z) - Runtime Analyses of Multi-Objective Evolutionary Algorithms in the Presence of Noise [7.421135890148154]
We conduct the first analysis of a classic benchmark in the presence of noise in the objective functions.
We prove guarantees for the case that bit-wise prior noise with rate $p \le \alpha/n$, for a suitable constant $\alpha$, is present.
We also prove that when all solutions are reevaluated in each iteration, any noise rate $p = \omega(\log(n)/n^2)$ leads to a super-polynomial runtime.
arXiv Detail & Related papers (2023-05-17T14:48:06Z) - The $(1+(\lambda,\lambda))$ Global SEMO Algorithm [8.34061303235504]
We show that the ideas behind the $(1+(\lambda,\lambda))$ genetic algorithm can be transported to multi-objective evolutionary computation; a sketch of one round of the underlying single-objective algorithm is given below.
We define the $(1+(\lambda,\lambda))$ global SEMO algorithm, a variant of the classic global SEMO algorithm, and prove that it optimizes the OneMinMax benchmark asymptotically faster than the global SEMO.
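The idea being "transported" here originates in a single-objective genetic algorithm. The following is a minimal sketch of one round of that single-objective $(1+(\lambda,\lambda))$ GA under the standard parameter coupling $p = \lambda/n$ and $c = 1/\lambda$; the function names and this exact formulation are illustrative assumptions and are not the multi-objective variant defined in the paper above.

```python
import random

def one_round_one_plus_ll_ga(x, f, lam):
    """One round of the single-objective (1+(lambda,lambda)) GA (standard
    formulation, assumed here): a mutation phase with rate lambda/n followed
    by a biased crossover phase with bias 1/lambda, then elitist selection."""
    n = len(x)
    p, c = lam / n, 1.0 / lam

    # Mutation phase: flip the same number ell ~ Bin(n, p) of bits in each offspring.
    ell = sum(random.random() < p for _ in range(n))
    def mutate(parent):
        child = parent[:]
        for i in random.sample(range(n), ell):
            child[i] ^= 1
        return child
    x_prime = max((mutate(x) for _ in range(lam)), key=f)

    # Crossover phase: take each bit from x_prime with probability c, else from x.
    def cross(a, b):
        return [bi if random.random() < c else ai for ai, bi in zip(a, b)]
    y = max((cross(x, x_prime) for _ in range(lam)), key=f)

    # Elitist selection: keep the offspring only if it is at least as good.
    return y if f(y) >= f(x) else x
```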
arXiv Detail & Related papers (2022-10-07T15:18:32Z) - Result Diversification by Multi-objective Evolutionary Algorithms with
Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z) - Correcting Momentum with Second-order Information [50.992629498861724]
We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point using an optimal number of stochastic gradient and Hessian-vector product computations.
We validate our results on a variety of large-scale deep learning benchmarks and architectures.
arXiv Detail & Related papers (2021-03-04T19:01:20Z) - GEO: Enhancing Combinatorial Optimization with Classical and Quantum Generative Models [62.997667081978825]
We introduce a new framework that leverages machine learning models known as generative models to solve optimization problems.
We focus on a quantum-inspired version of GEO relying on tensor-network Born machines.
We show its superior performance when the goal is to find the best minimum given a fixed budget for the number of function calls.
arXiv Detail & Related papers (2021-01-15T18:18:38Z) - Theoretical Analyses of Multiobjective Evolutionary Algorithms on Multimodal Objectives [14.309243378538014]
The OJZJ problem is a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark; a sketch of this benchmark follows below. We prove that the SEMO with probability one does not compute the full Pareto front, regardless of the runtime. We also show the tighter bound $\frac{3}{2} e n^{k+1} \pm o(n^{k+1})$, which might be the first runtime bound for an MOEA that is tight apart from lower-order terms.
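As a quick illustration of what "isomorphic to the classic jump function" means, here is a minimal sketch of the bi-objective OJZJ objective as it is usually defined in the runtime-analysis literature; the exact definition and the handling of the gap parameter $k$ are assumptions, not taken from this page.

```python
def jump(k, ones, n):
    """Classic jump function of the number of ones: increases up to n-k,
    then drops into a 'gap' that only the all-ones string escapes."""
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def ojzj(x, k=2):
    """Bi-objective OneJumpZeroJump (assumed standard definition): the first
    objective applies jump_k to the number of ones, the second applies
    jump_k to the number of zeros; both are maximized."""
    n = len(x)
    ones = sum(x)
    return (jump(k, ones, n), jump(k, n - ones, n))
```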
arXiv Detail & Related papers (2020-12-14T03:07:39Z) - Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z) - A new regret analysis for Adam-type algorithms [78.825194932103]
In theory, regret guarantees for online convex optimization require a rapidly decaying $\beta_1 \to 0$ schedule.
We propose a novel framework that allows us to derive optimal, data-dependent regret bounds with a constant $\beta_1$.
arXiv Detail & Related papers (2020-03-21T19:19:51Z)