Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem
- URL: http://arxiv.org/abs/2404.11496v2
- Date: Fri, 19 Apr 2024 00:31:21 GMT
- Title: Runtime Analysis of Evolutionary Diversity Optimization on the Multi-objective (LeadingOnes, TrailingZeros) Problem
- Authors: Denis Antipov, Aneta Neumann, Frank Neumann, Andrew M. Sutton
- Abstract summary: We analyse evolutionary diversity optimization (EDO) on the 3-objective function LOTZ$_k$.
We show that the GSEMO computes a set of all Pareto-optimal solutions in $O(kn^3)$ expected iterations.
We complement our theoretical analysis with an empirical study, which shows a very similar behavior for both diversity measures.
- Score: 10.697103866816958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diversity optimization is the class of optimization problems in which we aim to find a diverse set of good solutions. One of the frequently used approaches to solve such problems is to use evolutionary algorithms which evolve a desired diverse population. This approach is called evolutionary diversity optimization (EDO). In this paper, we analyse EDO on a 3-objective function LOTZ$_k$, which is a modification of the 2-objective benchmark function (LeadingOnes, TrailingZeros). We prove that the GSEMO computes a set of all Pareto-optimal solutions in $O(kn^3)$ expected iterations. We also analyze the runtime of the GSEMO$_D$ (a modification of the GSEMO for diversity optimization) until it finds a population with the best possible diversity for two different diversity measures, the total imbalance and the sorted imbalances vector. For the first measure we show that the GSEMO$_D$ optimizes it asymptotically faster than it finds a Pareto-optimal population, in $O(kn^2\log(n))$ expected iterations, and for the second measure we show an upper bound of $O(k^2n^3\log(n))$ expected iterations. We complement our theoretical analysis with an empirical study, which shows very similar behavior for both diversity measures, close to the theoretical predictions.
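As background for the abstract above, the sketch below shows the classic 2-objective (LeadingOnes, TrailingZeros) benchmark and a standard GSEMO loop as commonly described in the runtime-analysis literature. It is a minimal illustration under assumptions: the 3-objective LOTZ$_k$, the GSEMO$_D$ variant, and the two diversity measures (total imbalance, sorted imbalances vector) are defined in the paper and are not reproduced here; all function and variable names are our own.

```python
import random

def lotz(x):
    """Classic 2-objective (LeadingOnes, TrailingZeros) value of a bit string x."""
    n = len(x)
    leading_ones = next((i for i, b in enumerate(x) if b == 0), n)
    trailing_zeros = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
    return (leading_ones, trailing_zeros)

def weakly_dominates(f, g):
    """True if f is at least as good as g in every objective (maximization)."""
    return all(fi >= gi for fi, gi in zip(f, g))

def gsemo(n, objectives, iterations):
    """Standard GSEMO: maintain a set of mutually non-dominated solutions."""
    x = tuple(random.randint(0, 1) for _ in range(n))
    population = {x: objectives(x)}
    for _ in range(iterations):
        parent = random.choice(list(population))
        # Standard bit mutation: flip each bit independently with probability 1/n.
        child = tuple(b ^ (random.random() < 1.0 / n) for b in parent)
        f_child = objectives(child)
        if not any(weakly_dominates(f, f_child) for f in population.values()):
            # Keep the child and drop everything it weakly dominates.
            population = {y: f for y, f in population.items()
                          if not weakly_dominates(f_child, f)}
            population[child] = f_child
    return population

# Example: approximate the Pareto front of LOTZ on 20-bit strings.
front = gsemo(n=20, objectives=lotz, iterations=20_000)
```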
Related papers
- $f$-PO: Generalizing Preference Optimization with $f$-divergence Minimization [91.43730624072226]
$f$-PO is a novel framework that generalizes and extends existing approaches.
We conduct experiments on state-of-the-art language models using benchmark datasets.
arXiv Detail & Related papers (2024-10-29T02:11:45Z)
- Rigorous Runtime Analysis of Diversity Optimization with GSEMO on OneMinMax [13.026567958569965]
We study how the GSEMO algorithm with an additional diversity-enhancing mechanism optimizes the diversity of its population on the bi-objective benchmark problem OneMinMax (sketched below).
We prove that it finds a population with optimal diversity in expected time $O(n^2)$ when the problem size $n$ is odd.
To reach this goal, we analyse a random walk of the population, which reflects the frequency of changes in the population and their outcomes.
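For reference, a minimal sketch of the OneMinMax benchmark mentioned above, using its standard definition (the diversity-enhancing part of the algorithm is not reproduced here):

```python
def one_min_max(x):
    """Bi-objective OneMinMax: maximize the number of zeros and the number of ones.
    Every bit string is Pareto-optimal, so the interesting question is how diverse
    a population covering the front can be made."""
    ones = sum(x)
    return (len(x) - ones, ones)
```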
arXiv Detail & Related papers (2023-07-14T09:43:29Z)
- Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimum values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We propose the Enhanced Opposition Differential Evolution (EODE) algorithm to solve such multimodal optimization problems (MMOPs).
arXiv Detail & Related papers (2022-08-23T16:18:27Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
In order to reduce variance of the process, we also explore methods to stabilize the process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
- An Analysis of Phenotypic Diversity in Multi-Solution Optimization [118.97353274202749]
We show that multiobjective optimization does not always produce much diversity, multimodal optimization produces higher fitness solutions, and quality diversity is not sensitive to genetic neutrality.
An autoencoder is used to discover phenotypic features automatically, producing an even more diverse solution set with quality diversity.
arXiv Detail & Related papers (2021-05-10T10:39:03Z)
- Unified Convergence Analysis for Adaptive Optimization with Moving Average Estimator [75.05106948314956]
We show that an increasing or large momentum parameter for the first-order moment is sufficient for adaptive scaling.
We also give insights for increasing the momentum in a stagewise manner in accordance with stagewise decreasing step size.
arXiv Detail & Related papers (2021-04-30T08:50:24Z)
- Theoretical Analyses of Multiobjective Evolutionary Algorithms on Multimodal Objectives [15.56430085052365]
The OJZJ problem is a bi-objective problem composed of two objectives isomorphic to the classic jump function benchmark (sketched below).
We prove that the SEMO, with probability one, does not compute the full Pareto front, regardless of the runtime.
We also show the tighter bound $\frac{3}{2} e n^{k+1} \pm o(n^{k+1})$, which might be the first runtime bound for an MOEA that is tight apart from lower-order terms.
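As a rough illustration of the benchmark referred to above, one common formulation of OJZJ (OneJumpZeroJump) applies the classic Jump$_k$ construction to a bit string and to its complement; the sketch below follows that construction under assumption and is not a verbatim reproduction of the paper's definition:

```python
def jump_k(ones, n, k):
    """Classic Jump_k value as a function of the number of ones in a length-n string."""
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones

def ojzj(x, k):
    """Assumed bi-objective OJZJ sketch: Jump_k on x and Jump_k on its complement."""
    n, ones = len(x), sum(x)
    return (jump_k(ones, n, k), jump_k(n - ones, n, k))
```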
arXiv Detail & Related papers (2020-12-14T03:07:39Z)
- A Framework to Handle Multi-modal Multi-objective Optimization in Decomposition-based Evolutionary Algorithms [7.81768535871051]
Decomposition-based evolutionary algorithms have good performance for multi-objective optimization.
They are likely to perform poorly for multi-modal multi-objective optimization due to the lack of mechanisms to maintain the solution space diversity.
This paper proposes a framework to improve the performance of decomposition-based evolutionary algorithms for multi-modal multi-objective optimization.
arXiv Detail & Related papers (2020-09-30T14:32:57Z)
- Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)