Speeding Up the NSGA-II via Dynamic Population Sizes
- URL: http://arxiv.org/abs/2509.01739v1
- Date: Mon, 01 Sep 2025 19:45:17 GMT
- Title: Speeding Up the NSGA-II via Dynamic Population Sizes
- Authors: Benjamin Doerr, Martin S. Krejca, Simon Wietheger
- Abstract summary: We propose the dynamic NSGA-II (dNSGA-II), which is based on the popular NSGA-II and features a non-static population size. We prove that the dNSGA-II with parameters $\mu \geq 4(n + 1)$ and $\tau \geq \frac{256}{50} e n$ computes the full Pareto front of the \textsc{OneMinMax} benchmark of size $n$. For an optimal choice of $\mu$ and $\tau$, the resulting $O(n \log(n))$ runtime improves the optimal expected runtime of the classic NSGA-II by a factor of $\Theta(n)$.
- Score: 24.440172242035228
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-objective evolutionary algorithms (MOEAs) are among the most widely and successfully applied optimizers for multi-objective problems. However, to store many optimal trade-offs (the Pareto optima) at once, MOEAs are typically run with a large, static population of solution candidates, which can slow down the algorithm. We propose the dynamic NSGA-II (dNSGA-II), which is based on the popular NSGA-II and features a non-static population size. The dNSGA-II starts with a small initial population size of four and doubles it after a user-specified number $\tau$ of function evaluations, up to a maximum size of $\mu$. Via a mathematical runtime analysis, we prove that the dNSGA-II with parameters $\mu \geq 4(n + 1)$ and $\tau \geq \frac{256}{50} e n$ computes the full Pareto front of the \textsc{OneMinMax} benchmark of size $n$ in $O(\log(\mu) \tau + \mu \log(n))$ function evaluations, both in expectation and with high probability. For an optimal choice of $\mu$ and $\tau$, the resulting $O(n \log(n))$ runtime improves the optimal expected runtime of the classic NSGA-II by a factor of $\Theta(n)$. In addition, we show that the parameter $\tau$ can be removed when utilizing concurrent runs of the dNSGA-II. This approach leads to a mild slow-down by a factor of $O(\log(n))$ compared to an optimal choice of $\tau$ for the dNSGA-II, which is still a speed-up of $\Theta(n / \log(n))$ over the classic NSGA-II.
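To make the dynamic schedule concrete, the following is a minimal Python sketch of the dNSGA-II's core idea on OneMinMax. It is an illustration under stated assumptions, not the authors' implementation: the survival selection is a simplified stand-in for the NSGA-II's non-dominated sorting plus crowding distance (on OneMinMax every solution is Pareto-optimal, so deduplication by objective value suffices for a sketch), and padding the grown population with mutants is an assumption of this sketch. Only the schedule itself, starting at size 4 and doubling after every $\tau$ function evaluations up to the cap $\mu$, follows the abstract.

```python
import math
import random

def one_min_max(x):
    """Bi-objective OneMinMax: f(x) = (#zeros, #ones). Every bit string is
    Pareto-optimal, and the front has n + 1 distinct objective values."""
    ones = sum(x)
    return (len(x) - ones, ones)

def mutate(x):
    """Standard bit mutation: flip each bit independently with prob. 1/n."""
    n = len(x)
    return [bit ^ (random.random() < 1.0 / n) for bit in x]

def survive(pop, size):
    """Simplified survival selection (stand-in for non-dominated sorting plus
    crowding distance): on OneMinMax all solutions are mutually incomparable,
    so keep one representative per objective value, then fill up at random."""
    kept, rest, seen = [], [], set()
    for x in pop:
        f = one_min_max(x)
        (rest if f in seen else kept).append(x)
        seen.add(f)
    random.shuffle(rest)
    return (kept + rest)[:size]

def dnsga2(n, mu, tau, max_evals=10**7):
    """Core idea of the dNSGA-II: the population starts at size 4 and
    doubles after every tau function evaluations, capped at mu."""
    size = 4
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(size)]
    evals, next_doubling = size, tau
    while evals < max_evals:
        if len({one_min_max(x) for x in pop}) == n + 1:
            break  # full Pareto front covered
        offspring = [mutate(p) for p in pop]  # one offspring per parent
        evals += len(offspring)
        while evals >= next_doubling and size < mu:  # doubling schedule
            size = min(2 * size, mu)
            next_doubling += tau
        pop = survive(pop + offspring, size)
        while len(pop) < size:  # pad after a doubling (one plausible choice)
            pop.append(mutate(random.choice(pop)))
            evals += 1
    return pop, evals

if __name__ == "__main__":
    n = 20
    mu = 4 * (n + 1)                        # mu >= 4(n + 1), as in the theorem
    tau = math.ceil(256 / 50 * math.e * n)  # tau >= (256/50) e n
    pop, evals = dnsga2(n, mu, tau)
    front = {one_min_max(x) for x in pop}
    print(f"covered {len(front)}/{n + 1} Pareto optima in {evals} evaluations")
```

With the theorem's parameter choices, e.g. $\mu = 4(n+1)$ and $\tau = \Theta(n)$, the guaranteed number of evaluations $O(\log(\mu) \tau + \mu \log(n))$ simplifies to $O(n \log(n))$, which is the claimed $\Theta(n)$ speed-up over the classic NSGA-II.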
Related papers
- Improved Runtime Guarantees for the SPEA2 Multi-Objective Optimizer [21.23874800091344]
We show that the SPEA2 has very different population dynamics from the NSGA-II. We show that the best runtime guarantee of $O(n^{k+1})$ is not only achieved for $\mu = \Theta(n)$ and $\lambda = O(n^k)$, but for arbitrary $\mu, \lambda = O(n^k)$.
arXiv Detail & Related papers (2025-11-10T14:43:02Z)
- Towards a Rigorous Understanding of the Population Dynamics of the NSGA-III: Tight Runtime Bounds [5.482532589225553]
We prove tight runtime bounds for the NSGA-III on the bi-objective OneMinMax ($2$-OMM) problem. This is the first proven lower runtime bound for the NSGA-III on a classical benchmark problem. We also improve the best known upper bound for the NSGA-III on the $m$-objective OneMinMax problem.
arXiv Detail & Related papers (2025-11-10T14:11:07Z)
- A First Runtime Analysis of NSGA-III on a Many-Objective Multimodal Problem: Provable Exponential Speedup via Stochastic Population Update [1.223779595809275]
NSGA-III is a prominent algorithm in evolutionary many-objective optimization. This paper conducts a rigorous runtime analysis of NSGA-III on the many-objective $\mathrm{OJZJ}_{\mathrm{full}}$ benchmark. We show that NSGA-III is faster than NSGA-II by a factor of $\mu / n^{d/2}$ for some $\mu \in \omega(n^{d/2} \ln n)$.
arXiv Detail & Related papers (2025-05-02T13:26:05Z)
- Speeding Up the NSGA-II With a Simple Tie-Breaking Rule [9.044970217182117]
We analyze a simple tie-breaking rule in the selection of the next population. We prove the effectiveness of this rule on the classic OneMinMax, LeadingOnesTrailingZeroes, and OneJumpZeroJump benchmarks. For the bi-objective problems, we show runtime guarantees that do not increase when moderately increasing the population size over the minimum size.
arXiv Detail & Related papers (2024-12-16T16:15:37Z)
- Spacetime-Efficient Low-Depth Quantum State Preparation with Applications [93.56766264306764]
We show that a novel deterministic method for preparing arbitrary quantum states requires fewer quantum resources than previous methods.
We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations.
arXiv Detail & Related papers (2023-03-03T18:23:20Z)
- Simplifying and Understanding State Space Models with Diagonal Linear RNNs [56.33053691749856]
This work disposes of the discretization step, and proposes a model based on vanilla Diagonal Linear RNNs.
We empirically show that, despite being conceptually much simpler, $\mathrm{DLR}$ is as performant as previously proposed SSMs.
We also characterize the expressivity of SSMs and attention-based models via a suite of $13$ synthetic sequence-to-sequence tasks.
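The recurrence behind a diagonal linear RNN is compact enough to sketch. The following is a generic illustration of such a recurrence, not the paper's code; the parameter shapes, the readout, and the toy initialization are assumptions.

```python
import numpy as np

def dlr_forward(u, lam, B, C):
    """Generic diagonal linear RNN (illustration, not the paper's code):
        h_t = lam * h_{t-1} + B @ u_t    (elementwise complex transition)
        y_t = Re(C @ h_t)                (linear readout)
    Shapes: u (T, d_in) real; lam (N,) complex; B (N, d_in); C (d_out, N).
    """
    h = np.zeros(lam.shape[0], dtype=complex)
    ys = []
    for u_t in u:
        h = lam * h + B @ u_t  # O(N) state update thanks to diagonality
        ys.append((C @ h).real)
    return np.stack(ys)

# toy usage with assumed shapes: 8 complex states, scalar input and output
rng = np.random.default_rng(0)
N, T = 8, 16
lam = np.exp(-0.1 + 1j * rng.uniform(0.0, np.pi, N))  # stable: |lam| < 1
B = rng.normal(size=(N, 1)).astype(complex)
C = rng.normal(size=(1, N)).astype(complex)
y = dlr_forward(rng.normal(size=(T, 1)), lam, B, C)
print(y.shape)  # (16, 1)
```

Because the transition matrix is diagonal, each step costs $O(N)$ rather than $O(N^2)$, which is what makes this parameterization an attractive simplification of state space models.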
arXiv Detail & Related papers (2022-12-01T18:53:06Z)
- Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning [77.22019100456595]
We study a training algorithm for distributed computation workers with varying communication frequency. In this work, we obtain a tighter convergence rate of $\mathcal{O}(\sigma^2 \epsilon^{-2} + \tau_{\mathrm{avg}} \epsilon^{-1})$. We also show that the heterogeneity term in the rate is affected by the average delay within each worker.
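To make the delay bookkeeping concrete, here is a toy simulation of asynchronous SGD with stale gradients on a hypothetical quadratic objective. It is an illustration, not the paper's algorithm; it only tracks the average delay $\tau_{\mathrm{avg}}$ that the sharper rate depends on, as opposed to the maximum delay.

```python
import random

def grad(x):
    """Gradient of the toy objective f(x) = x^2 / 2 (a stand-in loss)."""
    return x

def async_sgd(steps=1000, num_workers=4, lr=0.05, x0=5.0):
    """Toy asynchronous SGD: each applied gradient was computed at an older
    iterate; its staleness is the delay. The summary above says the rate is
    governed by the average delay tau_avg rather than the worst case."""
    x = x0
    # (gradient, iteration it was computed at) for each in-flight worker
    inflight = [(grad(x0), 0)] * num_workers
    delays = []
    for t in range(1, steps + 1):
        w = random.randrange(num_workers)  # a random worker finishes first
        g, born = inflight[w]
        delays.append(t - 1 - born)        # staleness of the applied gradient
        x -= lr * g                        # apply the (possibly stale) gradient
        inflight[w] = (grad(x), t)         # worker starts on the fresh iterate
    tau_avg = sum(delays) / len(delays)
    return x, tau_avg

x, tau_avg = async_sgd()
print(f"final x = {x:.4f}, average delay tau_avg = {tau_avg:.2f}")
```

With uniformly random worker completions, the average delay here concentrates around `num_workers - 1`, illustrating why a bound in terms of $\tau_{\mathrm{avg}}$ can be much tighter than one in terms of the maximum delay when a single worker is occasionally very slow.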
arXiv Detail & Related papers (2022-06-16T17:10:57Z)
- A First Runtime Analysis of the NSGA-II on a Multimodal Problem [17.049516028133958]
This work shows that the NSGA-II copes with the local optima of the OneJumpZeroJump problem at least as well as the global SEMO algorithm.
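For reference, the OneJumpZeroJump benchmark combines two jump functions with gap size $k$; the sketch below transcribes the standard definition (both objectives are maximized), which is where the local optima mentioned above come from.

```python
def one_jump_zero_jump(x, k):
    """OneJumpZeroJump_{n,k} (standard definition): both objectives are jump
    functions, one counting ones, one counting zeros. Strings whose number of
    ones lies strictly between n - k and n (symmetrically, for zeros) fall
    into a fitness valley, creating the local optima the analysis is about."""
    n, ones = len(x), sum(x)
    zeros = n - ones

    def jump(a):
        return k + a if a <= n - k or a == n else n - a

    return (jump(ones), jump(zeros))

# examples with n = 10, k = 3
print(one_jump_zero_jump([1] * 10, 3))           # (13, 3): extreme Pareto point
print(one_jump_zero_jump([1] * 8 + [0] * 2, 3))  # (2, 5): inside the valley
```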
arXiv Detail & Related papers (2022-04-28T19:44:47Z)
- Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach that can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z)
- Private Stochastic Convex Optimization: Optimal Rates in $\ell_1$ Geometry [69.24618367447101]
Up to logarithmic factors, the optimal excess population loss of any $(\varepsilon,\delta)$-differentially private optimizer is $\sqrt{\log(d)/n} + \sqrt{d}/(\varepsilon n)$.
We show that when the loss functions satisfy additional smoothness assumptions, the excess loss is upper bounded (up to logarithmic factors) by $\sqrt{\log(d)/n} + (\log(d)/(\varepsilon n))^{2/3}$.
arXiv Detail & Related papers (2021-03-02T06:53:44Z)
- Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that, for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.