Multimodal Optimization with k-Cluster Big Bang-Big Crunch Algorithm and Postprocessing Methods for Identification and Quantification of Optima
- URL: http://arxiv.org/abs/2401.06153v2
- Date: Thu, 10 Oct 2024 16:16:15 GMT
- Title: Multimodal Optimization with k-Cluster Big Bang-Big Crunch Algorithm and Postprocessing Methods for Identification and Quantification of Optima
- Authors: Kemal Erdem Yenin, Reha Oguz Sayin, Kuzey Arar, Kadir Kaan Atalay, Fabio Stroppa
- Abstract summary: Multimodal optimization is often encountered in engineering problems, especially when different and alternative solutions are sought.
This paper investigates whether a less-known optimizer, the Big Bang-Big Crunch (BBBC) algorithm, is suitable for multimodal optimization.
- Score: 0.7639610349097473
- License:
- Abstract: Multimodal optimization is often encountered in engineering problems, especially when different and alternative solutions are sought. Evolutionary algorithms can efficiently tackle multimodal optimization thanks to features such as their population-based nature, their balance of exploration and exploitation, and their suitability for parallel computation. This paper investigates whether a less-known optimizer, the Big Bang-Big Crunch (BBBC) algorithm, is suitable for multimodal optimization. We extend BBBC and propose k-BBBC, a clustering-based multimodal optimizer. Additionally, we introduce two post-processing methods to (i) identify the local optima in a set of retrieved solutions (i.e., a population), and (ii) quantify the number of correctly retrieved optima against the expected ones (i.e., the success rate). Our results show that k-BBBC performs well even on problems with a large number of optima (tested on $379$ optima) and high dimensionality (tested on $32$ decision variables), but it becomes computationally too expensive for problems with many local optima (i.e., in the CEC'2013 benchmark set). It outperforms other multimodal optimization methods in terms of accuracy (in both search and objective space) and success rate (number of correctly retrieved optima) when tested on basic multimodal functions, especially when elitism is applied; however, it requires knowing the number of optima of a problem, which degrades its performance on the CEC'2013 niching competition benchmark. Lastly, we validated our proposed post-processing methods by comparing their estimated success rate to the actual one: the results suggest that these methods can be used to evaluate the performance of a multimodal optimization algorithm by correctly identifying optima and providing an indication of success, without the need to know where the optima are located in the search space.
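Since the abstract does not spell out the update rules, the following is only a minimal sketch of a clustering-based Big Bang-Big Crunch loop under stated assumptions: the classic BBBC fitness-weighted center-of-mass contraction, a re-sampling radius that shrinks with the iteration count, k-means to maintain one sub-population per sought optimum, and a simple per-cluster elitism. The function name k_bbbc, all parameter values, and the elitism scheme are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of a clustering-based Big Bang-Big Crunch (BBBC) loop.
# Assumptions (not taken from the paper): standard BBBC fitness-weighted
# center-of-mass contraction, k-means to keep one sub-population per sought
# optimum, and simple elitism preserving each cluster's best individual.
import numpy as np
from sklearn.cluster import KMeans


def k_bbbc(f, lower, upper, k, pop_size=200, iters=100, alpha=1.0, seed=0):
    """Minimize f over a box and return one candidate solution per cluster."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    dim = lo.size

    # Big Bang: uniform random initial population.
    pop = rng.uniform(lo, hi, size=(pop_size, dim))

    for t in range(1, iters + 1):
        fit = np.apply_along_axis(f, 1, pop)
        # One cluster per expected optimum (k must be known in advance).
        labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(pop)

        new_pop = np.empty_like(pop)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            x_c, f_c = pop[idx], fit[idx]

            # Big Crunch: fitness-weighted center of mass of the cluster
            # (weights shifted so they stay positive for minimization).
            w = 1.0 / (1.0 + f_c - f_c.min())
            center = (w[:, None] * x_c).sum(axis=0) / w.sum()

            # Big Bang: re-sample around the center with a shrinking spread.
            spread = alpha * (hi - lo) / t
            children = center + rng.standard_normal((idx.size, dim)) * spread
            children = np.clip(children, lo, hi)
            children[0] = x_c[f_c.argmin()]   # elitism: keep the cluster's best
            new_pop[idx] = children
        pop = new_pop

    # Report the best individual of each final cluster as a candidate optimum.
    fit = np.apply_along_axis(f, 1, pop)
    labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(pop)
    return np.array([pop[labels == c][fit[labels == c].argmin()] for c in range(k)])


if __name__ == "__main__":
    # f(x) = (x^2 - 1)^2 has two minima, at x = -1 and x = +1.
    found = k_bbbc(lambda x: (x[0] ** 2 - 1.0) ** 2, [-2.0], [2.0], k=2)
    print(np.sort(found.ravel()))   # expected to be roughly [-1, 1]
```
In this sketch, the best individual of each final cluster stands in for a retrieved optimum; the paper's post-processing methods for identifying optima and estimating the success rate without knowing their true locations are not reproduced here.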
Related papers
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z) - Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z) - Characterization of Constrained Continuous Multiobjective Optimization Problems: A Performance Space Perspective [0.0]
Constrained multiobjective optimization problems (CMOPs) remain insufficiently understood.
The choice of adequate CMOPs for benchmarking is difficult and lacks a formal background.
This paper presents a novel performance assessment approach designed explicitly for constrained multiobjective optimization.
arXiv Detail & Related papers (2023-02-04T14:12:30Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z) - Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimum values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We propose the Enhanced Opposition Differential Evolution (EODE) algorithm to solve such multimodal optimization problems (MMOPs); a sketch of the underlying opposition-based mechanism is given after this list.
arXiv Detail & Related papers (2022-08-23T16:18:27Z) - Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization [40.40632890861706]
We consider the problem of multi-objective (MO) blackbox optimization using expensive function evaluations.
We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation.
arXiv Detail & Related papers (2022-04-12T16:50:48Z) - Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Revisiting Bayesian Optimization in the light of the COCO benchmark [1.4467794332678539]
This article reports a large investigation of how common and less common design choices affect the performance of (Gaussian process based) BO.
The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN.
arXiv Detail & Related papers (2021-03-30T19:45:18Z) - Improving the Quantum Approximate Optimization Algorithm with postselection [0.0]
Combinatorial optimization is among the main applications envisioned for near-term and fault-tolerant quantum computers.
We consider a well-studied quantum algorithm for optimization: the Quantum Approximate Optimization Algorithm (QAOA) applied to the MaxCut problem on 3-regular graphs.
We derive theoretical upper and lower bounds showing that a constant (though small) increase of the fraction of satisfied edges is indeed achievable.
arXiv Detail & Related papers (2020-11-10T22:17:50Z) - Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z)