Runtime Analysis of the SMS-EMOA for Many-Objective Optimization
- URL: http://arxiv.org/abs/2312.10290v3
- Date: Wed, 12 Jun 2024 12:55:13 GMT
- Title: Runtime Analysis of the SMS-EMOA for Many-Objective Optimization
- Authors: Weijie Zheng, Benjamin Doerr
- Abstract summary: This paper conducts the first rigorous runtime analysis in many objectives for the SMS-EMOA.
We show that the SMS-EMOA has a performance comparable to the GSEMO and the NSGA-II.
- Score: 14.309243378538014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The classic NSGA-II was recently proven to have considerable difficulties in many-objective optimization. This paper conducts the first rigorous runtime analysis in many objectives for the SMS-EMOA, a steady-state NSGA-II that uses the hypervolume contribution instead of the crowding distance as the second selection criterion. To this aim, we first propose a many-objective counterpart, the m-objective mOJZJ, of the bi-objective OJZJ, which is the first many-objective multimodal benchmark for runtime analysis. We prove that SMS-EMOA computes the full Pareto front of this benchmark in an expected number of $O(\mu M n^k)$ iterations, where $n$ denotes the problem size (length of the bit-string representation), $k$ the gap size (a difficulty parameter of the problem), $M=(2n/m-2k+3)^{m/2}$ the size of the Pareto front, and $\mu$ the population size (at least the same size as the largest incomparable set). This result together with the existing negative result for the original NSGA-II shows that, in principle, the general approach of the NSGA-II is suitable for many-objective optimization, but the crowding distance as tie-breaker has deficiencies. We obtain three additional insights on the SMS-EMOA. Different from a recent result for the bi-objective OJZJ benchmark, a recently proposed stochastic population update often does not help for mOJZJ. It at most results in a speed-up by a factor of order $2^{k} / \mu$, which is $\Theta(1)$ for large $m$, such as $m>k$. On the positive side, we prove that heavy-tailed mutation irrespective of the number $m$ of objectives results in a speed-up of order $k^{0.5+k-\beta}/e^k$, the same advantage as previously shown for the bi-objective case. Finally, we conduct the first runtime analyses of the SMS-EMOA on the classic bi-objective OneMinMax and LOTZ benchmarks and show that the SMS-EMOA has a performance comparable to the GSEMO and the NSGA-II.
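To make the benchmark and the formula for $M$ concrete, below is a minimal Python sketch of the bi-objective OneJumpZeroJump$_k$ (OJZJ) fitness, a block-wise many-objective generalization, and a heavy-tailed (power-law) mutation operator. The block decomposition used for mOJZJ (m/2 disjoint blocks of length 2n/m) and all function names are illustrative assumptions, chosen only to be consistent with the Pareto-front size $M=(2n/m-2k+3)^{m/2}$ stated in the abstract; the paper's exact definitions should be consulted for details.

```python
import random


def ojzj(block, k):
    """Bi-objective OneJumpZeroJump_k value of a bit string.

    f1 rewards ones but has a fitness valley ("gap") of size k just before
    the all-ones string; f2 is symmetric with respect to zeros.
    """
    n = len(block)
    ones = sum(block)
    zeros = n - ones
    f1 = k + ones if (ones <= n - k or ones == n) else n - ones
    f2 = k + zeros if (zeros <= n - k or zeros == n) else n - zeros
    return (f1, f2)


def m_ojzj(x, m, k):
    """Sketch of an m-objective mOJZJ: apply OJZJ_k to m/2 equal blocks of x.

    ASSUMPTION: this block structure is chosen so that the Pareto front has
    the size M = (2n/m - 2k + 3)^(m/2) stated in the abstract.
    """
    n = len(x)
    block_len = 2 * n // m          # each of the m/2 blocks has length 2n/m
    objs = []
    for i in range(m // 2):
        block = x[i * block_len:(i + 1) * block_len]
        objs.extend(ojzj(block, k))
    return tuple(objs)


def pareto_front_size(n, m, k):
    """Size M of the mOJZJ Pareto front as given in the abstract."""
    return (2 * n // m - 2 * k + 3) ** (m // 2)


def heavy_tailed_mutation(x, beta=1.5):
    """Heavy-tailed bit-wise mutation in the spirit of fast/power-law mutation:
    draw a strength alpha from a power law with exponent beta on {1, ..., n/2},
    then flip each bit independently with probability alpha/n.
    (Whether this matches the paper's exact operator is an assumption.)
    """
    n = len(x)
    weights = [i ** (-beta) for i in range(1, n // 2 + 1)]
    alpha = random.choices(range(1, n // 2 + 1), weights=weights)[0]
    return [1 - b if random.random() < alpha / n else b for b in x]


if __name__ == "__main__":
    random.seed(0)
    n, m, k = 24, 4, 2
    x = [random.randint(0, 1) for _ in range(n)]
    print("objective vector:", m_ojzj(x, m, k))
    print("Pareto front size M:", pareto_front_size(n, m, k))  # (12-4+3)^2 = 121
    print("heavy-tailed offspring:", heavy_tailed_mutation(x))
```

For n=24, m=4, k=2 the sketch reports M = 11^2 = 121, matching the closed-form expression; the heavy-tailed operator occasionally flips many bits at once, which is what lets it cross the gap of size k much faster than standard bit-wise mutation.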
Related papers
- Convergence Rate Analysis of LION [54.28350823319057]
LION converges at a rate of order $\mathcal{O}(\sqrt{d}\,K^{-1/4})$, measured by the gradient norm, toward Karush-Kuhn-Tucker (KKT) points.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
arXiv Detail & Related papers (2024-11-12T11:30:53Z) - DI-MaskDINO: A Joint Object Detection and Instance Segmentation Model [67.56918651825056]
The performance of object detection lags behind that of instance segmentation (i.e., performance imbalance) when inspecting the intermediate results from the initial transformer decoder layer of MaskDINO.
This paper proposes DI-MaskDINO model, the core idea of which is to improve the final performance by alleviating the detection-segmentation imbalance.
DI-MaskDINO outperforms existing joint object detection and instance segmentation models on COCO and BDD100K benchmarks.
arXiv Detail & Related papers (2024-10-22T05:22:49Z) - A Crowding Distance That Provably Solves the Difficulties of the NSGA-II in Many-Objective Optimization [13.277972583315051]
Recent theoretical works have shown that the NSGA-II can have enormous difficulties in solving problems with more than two objectives.
We use the insights of these previous works to define a variant of the crowding distance, called the truthful crowding distance.
We show that the NSGA-II with this truthful crowding distance can solve the many-objective versions of the OneMinMax, COCZ, LOTZ, and OJZJ$_k$ problems.
arXiv Detail & Related papers (2024-07-25T01:09:58Z) - Beyond SOT: Tracking Multiple Generic Objects at Once [141.36900362724975]
Generic Object Tracking (GOT) is the problem of tracking target objects, specified by bounding boxes in the first frame of a video.
We introduce a new large-scale GOT benchmark, LaGOT, containing multiple annotated target objects per sequence.
Our approach achieves highly competitive results on single-object GOT datasets, setting a new state of the art on TrackingNet with a success rate AUC of 84.4%.
arXiv Detail & Related papers (2022-12-22T17:59:19Z) - Runtime Analysis for the NSGA-II: Proving, Quantifying, and Explaining
the Inefficiency For Many Objectives [16.904475483445452]
The NSGA-II is one of the most prominent algorithms to solve multi-objective optimization problems.
We show that the NSGA-II is less effective for larger numbers of objectives.
arXiv Detail & Related papers (2022-11-23T16:15:26Z) - A Mathematical Runtime Analysis of the Non-dominated Sorting Genetic
Algorithm III (NSGA-III) [9.853329403413701]
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications.
We provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives.
arXiv Detail & Related papers (2022-11-15T15:10:36Z) - From Understanding the Population Dynamics of the NSGA-II to the First
Proven Lower Bounds [10.107150356618588]
We prove that the NSGA-II with suitable population size needs $\Omega(N n \log n)$ function evaluations.
For the computation of the crowding distance contributions of the two objectives, we even obtain a runtime estimate that is tight including the leading constant.
arXiv Detail & Related papers (2022-09-28T10:11:20Z) - A First Runtime Analysis of the NSGA-II on a Multimodal Problem [17.049516028133958]
This work shows that the NSGA-II copes with the local optima of the OneJumpZeroJump problem at least as well as the global SEMO algorithm.
arXiv Detail & Related papers (2022-04-28T19:44:47Z) - Score-based Generative Modeling in Latent Space [93.8985523558869]
Score-based generative models (SGMs) have recently demonstrated impressive results in terms of both sample quality and distribution coverage.
Here, we propose the Latent Score-based Generative Model (LSGM), a novel approach that trains SGMs in a latent space.
Moving from data to latent space allows us to train more expressive generative models, apply SGMs to non-continuous data, and learn smoother SGMs in a smaller space.
arXiv Detail & Related papers (2021-06-10T17:26:35Z) - Model-Based Multi-Agent RL in Zero-Sum Markov Games with Near-Optimal
Sample Complexity [67.02490430380415]
We show that model-based MARL achieves a sample complexity of $\tilde{O}(|S||B|(1-\gamma)^{-3}\epsilon^{-2})$ for finding the Nash equilibrium (NE) value up to some $\epsilon$ error.
We also show that such a sample bound is minimax-optimal (up to logarithmic factors) if the algorithm is reward-agnostic, where the algorithm queries state transition samples without reward knowledge.
arXiv Detail & Related papers (2020-07-15T03:25:24Z) - Ranking a set of objects: a graph based least-square approach [70.7866286425868]
We consider the problem of ranking $N$ objects starting from a set of noisy pairwise comparisons provided by a crowd of equal workers.
We propose a class of non-adaptive ranking algorithms that rely on a least-squares intrinsic optimization criterion for the estimation of qualities.
arXiv Detail & Related papers (2020-02-26T16:19:09Z)