Near-Tight Runtime Guarantees for Many-Objective Evolutionary Algorithms
- URL: http://arxiv.org/abs/2404.12746v4
- Date: Mon, 23 Jun 2025 09:42:35 GMT
- Title: Near-Tight Runtime Guarantees for Many-Objective Evolutionary Algorithms
- Authors: Simon Wietheger, Benjamin Doerr
- Abstract summary: We consider a large class of MOEAs including the (global) SEMO, SMS-EMOA, balanced NSGA-II, NSGA-III, and SPEA2. Most of our bounds are tight apart from small factors in the number of objectives and length of bitstrings. This is the first time that such tight bounds are proven for many-objective uses of MOEAs.
- Score: 10.165640083594573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant progress in the field of mathematical runtime analysis of multi-objective evolutionary algorithms (MOEAs), the performance of MOEAs on discrete many-objective problems is little understood. In particular, the few existing performance guarantees for classic MOEAs on classic benchmarks are all roughly quadratic in the size of the Pareto front. In this work, we consider a large class of MOEAs including the (global) SEMO, SMS-EMOA, balanced NSGA-II, NSGA-III, and SPEA2. For these, we prove near-tight runtime guarantees for the four most common benchmark problems OneMinMax, CountingOnesCountingZeros, LeadingOnesTrailingZeros, and OneJumpZeroJump, and this for arbitrary numbers of objectives. Most of our bounds depend only linearly on the size of the largest incomparable set, showing that MOEAs on these benchmarks cope much better with many objectives than what previous works suggested. Most of our bounds are tight apart from small polynomial factors in the number of objectives and length of bitstrings. This is the first time that such tight bounds are proven for many-objective uses of MOEAs. For the runtime of the SEMO on the LOTZ benchmark in $m \ge 6$ objectives, our runtime guarantees are even smaller than the size of the largest incomparable set. This is again the first time that such runtime guarantees are proven.
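To make the objects in the abstract concrete, the sketch below is a minimal, illustrative implementation of the global SEMO on a many-objective OneMinMax-style benchmark. The block-based construction of the m-objective OneMinMax and all names below are assumptions for illustration, not the exact formulations or parameter settings analyzed in the paper.

```python
# Minimal sketch (assumptions, not the paper's exact setup): the global SEMO
# on an m-objective OneMinMax-style benchmark built from m/2 blocks.
import random

def m_one_min_max(x, m):
    """Assumed block-based mOMM: split x into m/2 equal blocks; for each block,
    report the number of ones and the number of zeros (all to be maximized)."""
    k = m // 2                              # m is assumed even
    block = len(x) // k                     # len(x) assumed divisible by k
    objs = []
    for i in range(k):
        ones = sum(x[i * block:(i + 1) * block])
        objs += [ones, block - ones]
    return tuple(objs)

def weakly_dominates(u, v):
    """u is at least as good as v in every objective (maximization)."""
    return all(a >= b for a, b in zip(u, v))

def gsemo(n, m, budget):
    """Global SEMO: keep only mutually non-dominated solutions; in each step,
    mutate a uniformly chosen parent by standard bit mutation with rate 1/n."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    pop = {x: m_one_min_max(x, m)}
    for _ in range(budget):
        parent = random.choice(list(pop))
        child = tuple(b ^ (random.random() < 1 / n) for b in parent)
        f_child = m_one_min_max(child, m)
        if any(weakly_dominates(f, f_child) for f in pop.values()):
            continue                        # child adds nothing new, discard
        # child survives: drop members it weakly dominates, then add it
        pop = {s: f for s, f in pop.items() if not weakly_dominates(f_child, f)}
        pop[child] = f_child
    return pop

# toy run: 4 objectives on 20-bit strings
print(len(gsemo(n=20, m=4, budget=20000)))
```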
Related papers
- Tight Runtime Guarantees From Understanding the Population Dynamics of the GSEMO Multi-Objective Evolutionary Algorithm [9.966672345291707]
We study the dynamics of the global simple evolutionary multi-objective (GSEMO) algorithm. We prove a lower bound of order $\Omega(n^2 \log n)$ for the classic CountingOnesCountingZeros (COCZ) benchmark. Our methods extend to other classic benchmarks and yield, e.g., the first $\Omega(n^{k+1})$ lower bound for the OJZJ benchmark.
arXiv Detail & Related papers (2025-05-02T13:40:25Z) - Do Large Language Model Benchmarks Test Reliability? [66.1783478365998]
We investigate how well current benchmarks quantify model reliability. Motivated by this gap in the evaluation of reliability, we propose the concept of so-called platinum benchmarks. We evaluate a wide range of models on these platinum benchmarks and find that, indeed, frontier LLMs still exhibit failures on simple tasks.
arXiv Detail & Related papers (2025-02-05T18:58:19Z) - Speeding Up the NSGA-II With a Simple Tie-Breaking Rule [9.044970217182117]
We analyze a simple tie-breaking rule in the selection of the next population.
We prove the effectiveness of this tie-breaking rule on the classic OneMinMax, LeadingOnesTrailingZeros, and OneJumpZeroJump benchmarks.
For the bi-objective problems, we show runtime guarantees that do not increase when moderately increasing the population size over the minimum size.
arXiv Detail & Related papers (2024-12-16T16:15:37Z) - Difficulties of the NSGA-II with the Many-Objective LeadingOnes Problem [9.044970217182117]
NSGA-II is the most prominent multi-objective evolutionary algorithm.
We show that the NSGA-II cannot compute the Pareto front in less than exponential time.
arXiv Detail & Related papers (2024-11-15T07:50:40Z) - Runtime Analyses of NSGA-III on Many-Objective Problems [10.955844285189372]
We present the first runtime analyses of NSGA-III on the popular many-objective benchmark problems mLOTZ, mOMM, and mCOCZ.
We show how these parameters should be scaled with the problem dimension, the number of objectives and the fitness range.
To our knowledge, these are the first runtime analyses for NSGA-III for more than 3 objectives.
arXiv Detail & Related papers (2024-04-17T14:39:14Z) - RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything [117.02741621686677]
This work explores a novel real-time segmentation setting called real-time multi-purpose segmentation. It contains three fundamental sub-tasks: interactive segmentation, panoptic segmentation, and video instance segmentation. We present a novel dynamic convolution-based method, Real-Time Multi-Purpose SAM (RMP-SAM). It contains an efficient encoder and an efficient decoupled adapter to perform prompt-driven decoding.
arXiv Detail & Related papers (2024-01-18T18:59:30Z) - Runtime Analysis of the SMS-EMOA for Many-Objective Optimization [14.309243378538014]
This paper conducts the first rigorous analysis of the SMS-EMOA for many-objective optimization. We first propose a many-objective counterpart of the bi-objective OJZJ benchmark. We show that the SMS-EMOA has a performance comparable to the GSEMO and the NSGA-II.
arXiv Detail & Related papers (2023-12-16T02:23:09Z) - SqueezeLLM: Dense-and-Sparse Quantization [80.32162537942138]
The main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, for single-batch inference.
We introduce SqueezeLLM, a post-training quantization framework that enables lossless compression to ultra-low precisions of up to 3-bit.
Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format.
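As a rough illustration of the two ideas named above (a sketch under assumptions, not the SqueezeLLM implementation or API): sensitivity-weighted 1-D k-means produces a non-uniform codebook for the dense weights, while the largest-magnitude weights are pulled out into a sparse matrix and kept in full precision. The quantile threshold, iteration count, and placeholder sensitivity scores below are illustrative choices.

```python
# Hedged sketch of (i) sensitivity-based non-uniform quantization and
# (ii) a dense-and-sparse split; not the actual SqueezeLLM code.
import numpy as np

def dense_and_sparse_split(W, outlier_quantile=0.999):
    """Keep the largest-magnitude weights exactly (sparse part); zero them out
    in the dense part, which is what gets quantized."""
    thresh = np.quantile(np.abs(W), outlier_quantile)
    sparse = np.where(np.abs(W) >= thresh, W, 0.0)
    dense = W - sparse
    return dense, sparse

def sensitivity_kmeans_codebook(w, s, n_bits=3, iters=25):
    """1-D k-means over weight values, with each weight's contribution weighted
    by its sensitivity s (e.g., diagonal second-order information)."""
    k = 2 ** n_bits
    centroids = np.quantile(w, np.linspace(0.0, 1.0, k))   # spread-out init
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            mask = assign == j
            if mask.any() and s[mask].sum() > 0:
                centroids[j] = np.average(w[mask], weights=s[mask])
    return centroids, assign

# toy usage on a random weight matrix with uniform (placeholder) sensitivities
W = np.random.randn(256, 256).astype(np.float32)
S = np.ones_like(W)
dense, sparse = dense_and_sparse_split(W)
codebook, idx = sensitivity_kmeans_codebook(dense.ravel(), S.ravel())
W_hat = codebook[idx].reshape(W.shape) + sparse   # 3-bit dense part + exact outliers
```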
arXiv Detail & Related papers (2023-06-13T08:57:54Z) - Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together! [100.19080749267316]
"Sparsity May Cry" Benchmark (SMC-Bench) is a collection of carefully-curated 4 diverse tasks with 10 datasets.
SMC-Bench is designed to favor and encourage the development of more scalable and generalizable sparse algorithms.
arXiv Detail & Related papers (2023-03-03T18:47:21Z) - Beyond SOT: Tracking Multiple Generic Objects at Once [141.36900362724975]
Generic Object Tracking (GOT) is the problem of tracking target objects, specified by bounding boxes in the first frame of a video.
We introduce a new large-scale GOT benchmark, LaGOT, containing multiple annotated target objects per sequence.
Our approach achieves highly competitive results on single-object GOT datasets, setting a new state of the art on TrackingNet with a success rate AUC of 84.4%.
arXiv Detail & Related papers (2022-12-22T17:59:19Z) - Runtime Analysis for the NSGA-II: Proving, Quantifying, and Explaining
the Inefficiency For Many Objectives [16.904475483445452]
The NSGA-II is one of the most prominent algorithms to solve multi-objective optimization problems.
We show that the NSGA-II is less effective for larger numbers of objectives.
arXiv Detail & Related papers (2022-11-23T16:15:26Z) - A Mathematical Runtime Analysis of the Non-dominated Sorting Genetic
Algorithm III (NSGA-III) [9.853329403413701]
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications.
We provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives.
arXiv Detail & Related papers (2022-11-15T15:10:36Z) - Multi-Target XGBoostLSS Regression [91.3755431537592]
We present an extension of XGBoostLSS that models multiple targets and their dependencies in a probabilistic regression setting.
Our approach outperforms existing GBMs with respect to runtime and compares well in terms of accuracy.
arXiv Detail & Related papers (2022-10-13T08:26:14Z) - BURST: A Benchmark for Unifying Object Recognition, Segmentation and
Tracking in Video [58.71785546245467]
Multiple existing benchmarks involve tracking and segmenting objects in video.
There is little interaction between them due to the use of disparate benchmark datasets and metrics.
We propose BURST, a dataset which contains thousands of diverse videos with high-quality object masks.
All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison.
arXiv Detail & Related papers (2022-09-25T01:27:35Z) - Quantum Goemans-Williamson Algorithm with the Hadamard Test and
Approximate Amplitude Constraints [62.72309460291971]
We introduce a variational quantum algorithm for the Goemans-Williamson algorithm that uses only $n+1$ qubits.
Efficient optimization is achieved by encoding the objective matrix as a properly parameterized unitary conditioned on an auxiliary qubit.
We demonstrate the effectiveness of our protocol by devising an efficient quantum implementation of the Goemans-Williamson algorithm for various NP-hard problems.
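For background (a standard fact about the Hadamard test named in the title, not the paper's specific construction): measuring the auxiliary qubit after the test yields $\Pr[0] = \frac{1}{2}\bigl(1 + \operatorname{Re}\langle\psi|U|\psi\rangle\bigr)$, so the real part of the expectation value is recovered as $2\Pr[0] - 1$.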
arXiv Detail & Related papers (2022-06-30T03:15:23Z) - Mathematical Runtime Analysis for the Non-Dominated Sorting Genetic
Algorithm II (NSGA-II) [16.904475483445452]
We show that runtime analyses are feasible also for the NSGA-II.
We prove that with a population size four times larger than the size of the Pareto front, the NSGA-II satisfies the same runtime guarantees as the SEMO and GSEMO algorithms.
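For concreteness (a standard property of the benchmark, not a restatement of the paper's theorem): the bi-objective OneMinMax on bitstrings of length $n$ maps $x \mapsto (\sum_i x_i,\, n - \sum_i x_i)$, so its Pareto front consists of the $n+1$ objective vectors $(i, n-i)$ for $0 \le i \le n$, and the population-size condition above then amounts to roughly $4(n+1)$ individuals.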
arXiv Detail & Related papers (2021-12-16T03:00:20Z) - PFENet++: Boosting Few-shot Semantic Segmentation with the
Noise-filtered Context-aware Prior Mask [62.37727055343632]
We revisit the prior mask guidance proposed in "Prior Guided Feature Enrichment Network for Few-Shot Segmentation" (PFENet).
We propose the Context-aware Prior Mask (CAPM) that leverages additional nearby semantic cues for better locating the objects in query images.
We take one step further by incorporating a lightweight Noise Suppression Module (NSM) to screen out the unnecessary responses.
arXiv Detail & Related papers (2021-09-28T15:07:43Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to their applications in deep learning (e.g., deep AUC maximization).
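For reference, the PL (Polyak-Lojasiewicz) condition in the title is the standard gradient-domination property (a general definition, not a claim about this paper's exact assumptions): a differentiable function $g$ with minimum value $g^*$ satisfies it with parameter $\mu > 0$ if $\frac{1}{2}\|\nabla g(x)\|^2 \ge \mu\,(g(x) - g^*)$ for all $x$.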
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
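As background (the generic decomposition template, not this paper's specific relaxation for neural networks): Lagrangian decomposition duplicates the variable across two constraint sets and dualizes only the coupling constraint, so that for a minimization problem $\min_{x \in A \cap B} f(x) \;\ge\; \max_{\rho} \min_{x \in A,\, z \in B} f(x) + \rho^\top (x - z)$, and any choice of $\rho$ already yields a valid bound.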
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.