Illustrating the Efficiency of Popular Evolutionary Multi-Objective Algorithms Using Runtime Analysis
- URL: http://arxiv.org/abs/2405.13572v1
- Date: Wed, 22 May 2024 12:09:00 GMT
- Title: Illustrating the Efficiency of Popular Evolutionary Multi-Objective Algorithms Using Runtime Analysis
- Authors: Duc-Cuong Dang, Andre Opris, Dirk Sudholt
- Abstract summary: Most previous runtime analyses showed that popular evolutionary multi-objective (EMO) algorithms have the same performance guarantees as the simple (G)SEMO algorithm.
We prove that, while GSEMO requires at least $n^n$ expected fitness evaluations to optimise OneTrapZeroTrap, popular EMO algorithms NSGA-II, NSGA-III and SMS-EMOA, all enhanced with a mild diversity mechanism of avoiding genotype duplication, only require $O(n \log n)$ expected fitness evaluations.
- Score: 3.748255320979002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Runtime analysis has recently been applied to popular evolutionary multi-objective (EMO) algorithms like NSGA-II in order to establish a rigorous theoretical foundation. However, most analyses showed that these algorithms have the same performance guarantee as the simple (G)SEMO algorithm. To our knowledge, there are no runtime analyses showing an advantage of a popular EMO algorithm over the simple algorithm for deterministic problems. We propose such a problem and use it to showcase the superiority of popular EMO algorithms over (G)SEMO: OneTrapZeroTrap is a straightforward generalization of the well-known Trap function to two objectives. We prove that, while GSEMO requires at least $n^n$ expected fitness evaluations to optimise OneTrapZeroTrap, popular EMO algorithms NSGA-II, NSGA-III and SMS-EMOA, all enhanced with a mild diversity mechanism of avoiding genotype duplication, only require $O(n \log n)$ expected fitness evaluations. Our analysis reveals the importance of the key components in each of these sophisticated algorithms and contributes to a better understanding of their capabilities.
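To make the objects of study concrete, below is a minimal Python sketch of the classic single-objective Trap function (which the abstract says OneTrapZeroTrap generalises to two objectives) together with a bare-bones GSEMO loop, the baseline algorithm in the comparison. The bi-objective pairing in the usage line is only one plausible reading; the paper's exact definition of OneTrapZeroTrap, and the genotype-duplication-avoidance mechanism it adds to NSGA-II/NSGA-III/SMS-EMOA, are not reproduced here. All function names are mine.

```python
import random

def trap(x):
    """Classic deceptive Trap (maximisation): the all-ones string is optimal,
    but every other point rewards additional zeros, pulling search away."""
    n, ones = len(x), sum(x)
    return n + 1 if ones == n else n - ones

def weakly_dominates(a, b):
    return all(ai >= bi for ai, bi in zip(a, b))

def gsemo(f, n, budget, seed=0):
    """Global SEMO: keep an archive of mutually non-dominated genotypes and
    mutate a uniform random member with standard bit mutation (rate 1/n)."""
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(n))
    archive = {x: f(x)}
    for _ in range(budget):
        parent = rng.choice(list(archive))
        child = tuple(1 - b if rng.random() < 1 / n else b for b in parent)
        fc = f(child)
        if not any(weakly_dominates(fv, fc) for fv in archive.values()):
            archive = {g: fv for g, fv in archive.items()
                       if not weakly_dominates(fc, fv)}
            archive[child] = fc
    return archive

# Hedged usage: one plausible pairing of two opposed traps; the paper's
# actual bi-objective definition may differ.
f = lambda x: (trap(x), trap([1 - b for b in x]))
print(gsemo(f, n=20, budget=10_000))
```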
Related papers
- Towards Running Time Analysis of Interactive Multi-objective Evolutionary Algorithms [23.815981112784552]
This paper provides the first running time analysis (an essential theoretical tool for EAs) for practical iMOEAs.
We prove that the expected running time of the well-developed interactive NSGA-II for solving the OneMinMax and OneJumpZeroJump problems is $O(n \log n)$ and $O(n^k)$, respectively.
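For readers unfamiliar with the two benchmarks, here is a minimal sketch of their standard definitions from the runtime-analysis literature (hedged: additive constants can vary between papers, and function names are mine):

```python
def one_min_max(x):
    """Maximise the number of zeros and the number of ones simultaneously;
    every bit string is Pareto-optimal and the front spans n+1 fitness values."""
    n, ones = len(x), sum(x)
    return (n - ones, ones)

def one_jump_zero_jump(x, k):
    """Each objective is a Jump-style function with gap size k: one rewards
    ones (with a fitness valley just below the all-ones string), the other
    symmetrically rewards zeros."""
    n, ones = len(x), sum(x)
    zeros = n - ones
    f1 = k + ones if ones <= n - k or ones == n else n - ones
    f2 = k + zeros if zeros <= n - k or zeros == n else n - zeros
    return (f1, f2)
```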
arXiv Detail & Related papers (2023-10-12T14:57:47Z)
- Analysing the Robustness of NSGA-II under Noise [1.7205106391379026]
First runtime analyses of the famous and highly cited EMO algorithm NSGA-II have emerged.
It is unclear how and when NSGA-II can outperform GSEMO.
arXiv Detail & Related papers (2023-06-07T15:33:54Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- A Mathematical Runtime Analysis of the Non-dominated Sorting Genetic Algorithm III (NSGA-III) [9.853329403413701]
The Non-dominated Sorting Genetic Algorithm II (NSGA-II) is the most prominent multi-objective evolutionary algorithm for real-world applications.
We provide the first mathematical runtime analysis of the NSGA-III, a refinement of the NSGA-II aimed at better handling more than two objectives.
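The feature that distinguishes NSGA-III from NSGA-II in the literature is its reference-point-based niching: after normalisation, each individual is associated with the closest of a set of reference directions, and selection balances the resulting niches. A minimal sketch of that association step (assuming already-normalised objective vectors; names are mine):

```python
import numpy as np

def associate(points, refs):
    """points: (N, M) normalised objective vectors; refs: (R, M) reference
    directions. Returns the index of the closest reference line per point
    and the corresponding perpendicular distance."""
    refs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    proj = points @ refs.T                       # scalar projections, (N, R)
    foot = proj[:, :, None] * refs[None, :, :]   # foot of perpendicular, (N, R, M)
    perp = np.linalg.norm(points[:, None, :] - foot, axis=2)
    return perp.argmin(axis=1), perp.min(axis=1)
```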
arXiv Detail & Related papers (2022-11-15T15:10:36Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$.
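For context, expected improvement has a closed form under a Gaussian predictive distribution; the variant described here replaces the usual best-observed-value incumbent with the best GP predictive mean. A minimal sketch for minimisation (the function name and exact incumbent rule are my assumptions):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """Closed-form EI for minimisation at points with GP predictive mean `mu`
    and standard deviation `sigma`; `incumbent` is the smallest predictive
    mean seen so far (rather than the best noisy observation)."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
    z = (incumbent - mu) / sigma
    return (incumbent - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```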
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- A Simple Evolutionary Algorithm for Multi-modal Multi-objective Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from widely used benchmark test suites, under a low computational budget of 1000 function evaluations.
arXiv Detail & Related papers (2022-01-18T03:31:11Z)
- Mathematical Runtime Analysis for the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) [16.904475483445452]
We show that runtime analyses are also feasible for the NSGA-II.
We prove that with a population size four times larger than the size of the Pareto front, the NSGA-II satisfies the same runtime guarantees as the SEMO and GSEMO algorithms.
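The two mechanisms these analyses revolve around are NSGA-II's non-dominated sorting and crowding-distance tie-breaking. A compact textbook-style sketch of both (maximisation convention; function names are mine):

```python
def dominates(a, b):
    """a dominates b (maximisation): no worse everywhere, better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_sort(F):
    """Sort objective vectors F into Pareto fronts; returns lists of indices."""
    n = len(F)
    dominated = [[j for j in range(n) if dominates(F[i], F[j])] for i in range(n)]
    counts = [sum(dominates(F[j], F[i]) for j in range(n)) for i in range(n)]
    fronts = [[i for i in range(n) if counts[i] == 0]]
    while True:
        nxt = []
        for i in fronts[-1]:
            for j in dominated[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        if not nxt:
            return fronts
        fronts.append(nxt)

def crowding_distance(F, front):
    """Crowding distance of each index in `front`; boundary points get inf."""
    d = {i: 0.0 for i in front}
    for k in range(len(F[0])):
        order = sorted(front, key=lambda i: F[i][k])
        d[order[0]] = d[order[-1]] = float("inf")
        span = F[order[-1]][k] - F[order[0]][k] or 1.0
        for left, mid, right in zip(order, order[1:], order[2:]):
            d[mid] += (F[right][k] - F[left][k]) / span
    return d
```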
arXiv Detail & Related papers (2021-12-16T03:00:20Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
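For intuition, the ES idea is to estimate the Jacobian of a black-box map purely from forward evaluations via Gaussian smoothing. A minimal antithetic sketch (here `g` stands in for the inner-problem response map $y^*(x)$; the sample count, smoothing radius, and names are my assumptions, not the paper's algorithm):

```python
import numpy as np

def es_jacobian(g, x, sigma=1e-2, n_samples=64, seed=0):
    """Antithetic ES estimate of dg/dx at x: for Gaussian u,
    E[(g(x + sigma*u) - g(x - sigma*u)) u^T] / (2*sigma) approaches the
    Jacobian as sigma -> 0, using only forward evaluations of g."""
    rng = np.random.default_rng(seed)
    J = np.zeros((np.atleast_1d(g(x)).size, x.size))
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        J += np.outer(g(x + sigma * u) - g(x - sigma * u), u) / (2 * sigma)
    return J / n_samples
```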
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
- GEO: Enhancing Combinatorial Optimization with Classical and Quantum Generative Models [62.997667081978825]
We introduce a new framework that leverages machine learning models known as generative models to solve optimization problems.
We focus on a quantum-inspired version of GEO relying on tensor-network Born machines.
We show its superior performance when the goal is to find the best minimum given a fixed budget for the number of function calls.
arXiv Detail & Related papers (2021-01-15T18:18:38Z)