Theoretical Analyses of Multiobjective Evolutionary Algorithms on Multimodal Objectives
- URL: http://arxiv.org/abs/2012.07231v6
- Date: Sun, 09 Feb 2025 14:09:15 GMT
- Title: Theoretical Analyses of Multiobjective Evolutionary Algorithms on Multimodal Objectives
- Authors: Weijie Zheng, Benjamin Doerr
- Abstract summary: OJZJ problem is a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark.
We prove that SEMO with probability one does not compute the full Pareto front, regardless of the runtime.
We also show the tighter bound $\frac{3}{2} e n^{k+1} \pm o(n^{k+1})$, which might be the first runtime bound for an MOEA that is tight apart from lower-order terms.
- Abstract: The theoretical understanding of MOEAs is lagging far behind their success in practice. In particular, previous theory work considers mostly easy problems that are composed of unimodal objectives. As a first step towards a deeper understanding of how evolutionary algorithms solve multimodal multiobjective problems, we propose the OJZJ problem, a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark. We prove that SEMO with probability one does not compute the full Pareto front, regardless of the runtime. In contrast, for all problem sizes $n$ and all jump sizes ${k \in [4..\frac n2 - 1]}$, the global SEMO (GSEMO) covers the Pareto front in an expected number of $\Theta((n-2k)n^{k})$ iterations. For $k = o(n)$, we also show the tighter bound $\frac 32 e n^{k+1} \pm o(n^{k+1})$, which might be the first runtime bound for an MOEA that is tight apart from lower-order terms. We also combine the GSEMO with two approaches that showed advantages in single-objective multimodal problems. When using the GSEMO with a heavy-tailed mutation operator, the expected runtime improves by a factor of at least $k^{\Omega(k)}$. When adapting the recent stagnation-detection strategy of Rajabi and Witt (2022) to the GSEMO, the expected runtime also improves by a factor of at least $k^{\Omega(k)}$ and surpasses the heavy-tailed GSEMO by a small polynomial factor in $k$. Via an experimental analysis, we show that these asymptotic differences are visible already for small problem sizes: A factor-$5$ speed-up from heavy-tailed mutation and a factor-$10$ speed-up from stagnation detection can be observed already for jump size~$4$ and problem sizes between $10$ and $50$. Overall, our results show that the ideas recently developed to aid single-objective evolutionary algorithms to cope with local optima can be effectively employed also in multiobjective optimization.
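The construction described in the abstract can be illustrated with a short sketch. The jump function below follows the classic benchmark (value $k+|x|_1$ on the increasing plateau and $n-|x|_1$ in the gap of size $k$ before the optimum), and OJZJ applies it once to the number of ones and once to the number of zeros; the GSEMO loop is a generic textbook version with a non-dominated archive and standard bit mutation, not the authors' exact implementation.

```python
import random

def jump(m, n, k):
    # Classic jump value for m ones out of n bits with jump size k:
    # increasing plateau up to n - k, a low-valued gap, optimum at m = n.
    if m <= n - k or m == n:
        return k + m
    return n - m

def ojzj(x, k):
    # OneJumpZeroJump: one jump objective on the number of ones and a
    # second, isomorphic one on the number of zeros (both maximized).
    n = len(x)
    ones = sum(x)
    return (jump(ones, n, k), jump(n - ones, n, k))

def dominates(a, b):
    # Pareto dominance for maximization: weakly better everywhere,
    # strictly better somewhere.
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

def gsemo(n, k, iterations, seed=0):
    # Global SEMO: keep an archive of mutually non-dominated solutions,
    # pick a uniform parent, flip each bit independently with rate 1/n,
    # and accept the child if no archived solution dominates it.
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(n))
    archive = {x: ojzj(x, k)}
    for _ in range(iterations):
        parent = rng.choice(list(archive))
        child = tuple(b ^ (rng.random() < 1 / n) for b in parent)
        fc = ojzj(child, k)
        if not any(dominates(f, fc) for f in archive.values()):
            # Drop everything the child weakly dominates, then add it.
            archive = {s: f for s, f in archive.items()
                       if not all(c >= v for c, v in zip(fc, f))}
            archive[child] = fc
    return archive
```

For example, with $n = 10$ and $k = 4$, the all-ones string attains $(n + k, k) = (14, 4)$, while a string with $8$ ones sits in the gap of the first objective and gets $(2, 6)$. The local optima that make the problem hard for the plain SEMO are the plateau endpoints, from which a jump of $k$ bits is needed.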
Related papers
- Theoretical Advantage of Multiobjective Evolutionary Algorithms for Problems with Different Degrees of Conflict
The OneMaxMin$_k$ benchmark class is a generalized variant of COCZ and OneMinMax.
Two typical non-MOEA approaches, scalarization (the weighted-sum approach) and the $\epsilon$-constraint approach, are considered.
We prove that (G)SEMO, MOEA/D, NSGA-II, and SMS-EMOA can cover the full Pareto front in $O(\max\{k,1\}\, n\ln n)$ expected function evaluations.
arXiv Detail & Related papers (2024-08-08T04:09:52Z) - A First Running Time Analysis of the Strength Pareto Evolutionary Algorithm 2 (SPEA2)
We present the first running time analysis of the Strength Pareto Evolutionary Algorithm 2 (SPEA2).
Specifically, we prove upper bounds on the expected running time of SPEA2 for solving three commonly used multiobjective problems, i.e., $m$-OneMinMax, $m$-LeadingOnesZeroes, and $m$-OneZeroJump.
arXiv Detail & Related papers (2024-06-23T14:12:22Z) - Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem
We analyze evolutionary diversity optimization (EDO) on the 3-objective function LOTZ$_k$.
We show that the GSEMO computes a set of all Pareto-optimal solutions in $O(kn^3)$ expected iterations.
We complement our theoretical analysis with an empirical study, which shows a very similar behavior for both diversity measures.
arXiv Detail & Related papers (2024-04-17T15:51:15Z) - Adaptive, Doubly Optimal No-Regret Learning in Strongly Monotone and Exp-Concave Games with Gradient Feedback
Online gradient descent (OGD) is well known to be doubly optimal under strong convexity or monotonicity assumptions.
In this paper, we design a fully adaptive OGD algorithm, AdaOGD, that does not require a priori knowledge of these parameters.
arXiv Detail & Related papers (2023-10-21T18:38:13Z) - An Oblivious Stochastic Composite Optimization Algorithm for Eigenvalue Optimization Problems
We introduce two oblivious mirror descent algorithms based on a complementary composite setting.
Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We show how to extend our framework to scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.
arXiv Detail & Related papers (2023-06-30T08:34:29Z) - Global Nash Equilibrium in Non-convex Multi-player Game: Theory and Algorithms
We study the global Nash equilibrium (NE) of non-convex multi-player games, a solution acceptable to all players because no player can benefit from unilaterally deviating.
arXiv Detail & Related papers (2023-01-19T11:36:50Z) - The $(1+(\lambda,\lambda))$ Global SEMO Algorithm
We show that the $(1+(\lambda,\lambda))$ genetic algorithm can be transported to multi-objective evolutionary computation.
We define the $(1+(\lambda,\lambda))$ global SEMO algorithm, a variant of the classic global SEMO algorithm, and prove that it optimizes the OneMinMax benchmark asymptotically faster than the global SEMO.
arXiv Detail & Related papers (2022-10-07T15:18:32Z) - A First Runtime Analysis of the NSGA-II on a Multimodal Problem
This work shows that the NSGA-II copes with the local optima of the OneJumpZeroJump problem at least as well as the global SEMO algorithm.
arXiv Detail & Related papers (2022-04-28T19:44:47Z) - Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z) - Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal Sample Complexity
We show that model-based MARL achieves a sample complexity of $\tilde O(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z) - FANOK: Knockoffs in Linear Time
We describe a series of algorithms that efficiently implement Gaussian model-X knockoffs to control the false discovery rate on large scale feature selection problems.
We test our methods on problems with $p$ as large as $500,000$.
arXiv Detail & Related papers (2020-06-15T21:55:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.