Multi-Modal Optimization with k-Cluster Big Bang-Big Crunch Algorithm
- URL: http://arxiv.org/abs/2401.06153v1
- Date: Thu, 21 Dec 2023 06:16:32 GMT
- Title: Multi-Modal Optimization with k-Cluster Big Bang-Big Crunch Algorithm
- Authors: Kemal Erdem Yenin, Reha Oguz Sayin, Kuzey Arar, Kadir Kaan Atalay, and
Fabio Stroppa
- Abstract summary: This paper introduces a multi-modal optimization version of the Big Bang-Big Crunch algorithm based on clustering, namely, k-BBBC.
This algorithm guarantees complete convergence of the entire population, retrieving on average 99% of the local optima of a given problem.
- Score: 0.7639610349097473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-modal optimization is often encountered in engineering problems,
especially when different and alternative solutions are sought. Evolutionary
algorithms can efficiently tackle multi-modal optimization thanks to their
features such as the concept of population, exploration/exploitation, and being
suitable for parallel computation.
This paper introduces a multi-modal optimization version of the Big Bang-Big
Crunch algorithm based on clustering, namely, k-BBBC. This algorithm guarantees
complete convergence of the entire population, retrieving on average 99% of
the local optima of a given problem. Additionally, we introduce two
post-processing methods to (i) identify the local optima in a set of retrieved
solutions (i.e., a population), and (ii) quantify the number of correctly
retrieved optima against the expected ones (i.e., success rate).
Our results show that k-BBBC performs well even on problems with a large
number of optima (tested on 379 optima) and high dimensionality (tested on 32
decision variables). When compared to other multi-modal optimization methods,
it outperforms them in terms of accuracy (in both search and objective space)
and success rate (number of correctly retrieved optima) -- especially when
elitism is applied. Lastly, we validated our proposed post-processing methods
by comparing their success rate to the actual one. Results suggest that these
methods can be used to evaluate the performance of a multi-modal optimization
algorithm by correctly identifying optima and providing an indication of
success -- without the need to know where the optima are located in the search
space.
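For intuition, the following is a minimal sketch of the clustered Big Bang-Big Crunch loop described in the abstract, assuming a k-means partition of the population and the standard BB-BC center-of-mass and shrinking-perturbation updates; the function names, the fitness weighting, and the success_rate helper are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch of k-BBBC (not the authors' code): each k-means cluster
# runs its own Big Bang-Big Crunch contraction toward one local optimum.
import numpy as np
from scipy.cluster.vq import kmeans2

def k_bbbc(f, lo, hi, k, pop_size=300, iters=200, elitism=True, seed=0):
    """Minimize f over the box [lo, hi]; return one candidate optimum per cluster."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for t in range(1, iters + 1):
        fit = np.apply_along_axis(f, 1, pop)
        _, labels = kmeans2(pop, k, minit='++')    # one cluster per sought optimum
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue                           # empty cluster: skip this round
            members, f_members = pop[idx], fit[idx]
            # Big Crunch: fitness-weighted center of mass (assumed weighting).
            w = 1.0 / (1.0 + f_members - f_members.min())
            center = (w[:, None] * members).sum(axis=0) / w.sum()
            # Big Bang: re-sample around the center, radius shrinking as 1/t.
            noise = rng.standard_normal((idx.size, lo.size)) * (hi - lo) / t
            pop[idx] = np.clip(center + noise, lo, hi)
            if elitism:                            # keep each cluster's best point
                j = np.argmin(f_members)
                pop[idx[j]] = members[j]
    fit = np.apply_along_axis(f, 1, pop)
    _, labels = kmeans2(pop, k, minit='++')
    return np.array([pop[labels == c][np.argmin(fit[labels == c])]
                     for c in range(k) if np.any(labels == c)])

def success_rate(found, expected, tol):
    """Fraction of expected optima matched by some found solution within tol."""
    hits = sum(any(np.linalg.norm(e - x) <= tol for x in found) for e in expected)
    return hits / len(expected)
```

On a benchmark with known optima, success_rate computes the "actual" success rate that the abstract uses to validate post-processing method (ii); the paper's own methods are designed to estimate success without knowing the optima locations.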
Related papers
- Greedy Restart Schedules: A Baseline for Dynamic Algorithm Selection on Numerical Black-box Optimization Problems [0.0]
We present a scheduling approach that iteratively selects the algorithm performing best on the distribution of unsolved training problems at the time of selection, resulting in a problem-independent solver schedule.
We demonstrate our approach using well-known optimizers from numerical black-box optimization on the BBOB testbed, bridging much of the gap between the single best and the virtual best solver from the original portfolio across various evaluation protocols.
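As a rough illustration of this greedy selection rule, here is a sketch under assumptions: the runtime-matrix input, the fixed per-slot budget, and all names are hypothetical, not the paper's interface.

```python
# Hypothetical sketch of a greedy restart schedule: at each step, pick the
# solver that solves the most still-unsolved training problems within budget.
import numpy as np

def greedy_schedule(runtimes, budget_per_slot, slots):
    """runtimes[i, j]: time solver i needs on problem j (np.inf if it fails)."""
    n_solvers, n_problems = runtimes.shape
    unsolved = np.ones(n_problems, dtype=bool)
    schedule = []
    for _ in range(slots):
        # Count, per solver, how many unsolved problems it finishes in budget.
        solved_counts = (runtimes[:, unsolved] <= budget_per_slot).sum(axis=1)
        best = int(np.argmax(solved_counts))
        schedule.append(best)
        unsolved &= ~(runtimes[best] <= budget_per_slot)
        if not unsolved.any():
            break
    return schedule
```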
arXiv Detail & Related papers (2025-04-15T17:54:21Z)
- High-Dimensional Bayesian Optimization Using Both Random and Supervised Embeddings [0.6291443816903801]
This paper proposes a high-dimensional optimization method incorporating linear embedding subspaces of small dimension.
The resulting BO method combines in an adaptive way both random and supervised linear embeddings.
The obtained results show the high potential of EGORSE to solve high-dimensional black-box optimization problems.
arXiv Detail & Related papers (2025-02-02T16:57:05Z)
- Learning Multiple Initial Solutions to Optimization Problems [52.9380464408756]
Sequentially solving similar optimization problems under strict runtime constraints is essential for many applications.
We propose learning to predict multiple diverse initial solutions given parameters that define the problem instance.
We find significant and consistent improvement with our method across all evaluation settings and demonstrate that it efficiently scales with the number of initial solutions required.
arXiv Detail & Related papers (2024-11-04T15:17:19Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Neural Tangent Kernel-derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both optimization performance and query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Characterization of Constrained Continuous Multiobjective Optimization Problems: A Performance Space Perspective [0.0]
Constrained multiobjective optimization problems (CMOPs) remain insufficiently understood.
The choice of adequate CMOPs for benchmarking is difficult and lacks a formal background.
This paper presents a novel performance assessment approach designed explicitly for constrained multiobjective optimization.
arXiv Detail & Related papers (2023-02-04T14:12:30Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Enhanced Opposition Differential Evolution Algorithm for Multimodal Optimization [0.2538209532048866]
Most real-world problems are multimodal in nature, consisting of multiple optimal values.
Classical gradient-based methods fail for optimization problems in which the objective functions are either discontinuous or non-differentiable.
We propose the Enhanced Opposition Differential Evolution (EODE) algorithm to solve multimodal optimization problems (MMOPs).
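EODE's specific enhancements are not detailed in this summary; the sketch below only illustrates the core opposition-based learning step commonly paired with differential evolution (the opposite of x in the box [lo, hi] is lo + hi - x), with all names assumed.

```python
# Illustrative opposition-based step for a DE population (not EODE itself):
# evaluate each candidate's "opposite" point and keep the better of the pair.
import numpy as np

def opposition_step(f, pop, lo, hi):
    """Replace each individual by its opposite point if the opposite is better."""
    opposite = lo + hi - pop                      # opposition-based learning
    f_pop = np.apply_along_axis(f, 1, pop)
    f_opp = np.apply_along_axis(f, 1, opposite)
    better = f_opp < f_pop                        # minimization
    pop[better] = opposite[better]
    return pop
```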
arXiv Detail & Related papers (2022-08-23T16:18:27Z)
- Uncertainty-Aware Search Framework for Multi-Objective Bayesian Optimization [40.40632890861706]
We consider the problem of multi-objective (MO) black-box optimization using expensive function evaluations.
We propose a novel uncertainty-aware search framework referred to as USeMO to efficiently select the sequence of inputs for evaluation.
arXiv Detail & Related papers (2022-04-12T16:50:48Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Revisiting Bayesian Optimization in the light of the COCO benchmark [1.4467794332678539]
This article reports a large investigation of how common and less common design choices affect the performance of (Gaussian process based) BO.
The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN.
arXiv Detail & Related papers (2021-03-30T19:45:18Z)
- Improving the Quantum Approximate Optimization Algorithm with postselection [0.0]
Combinatorial optimization is among the main applications envisioned for near-term and fault-tolerant quantum computers.
We consider a well-studied quantum algorithm for optimization: the Quantum Approximate Optimization Algorithm (QAOA) applied to the MaxCut problem on 3-regular graphs.
We derive theoretical upper and lower bounds showing that a constant (though small) increase of the fraction of satisfied edges is indeed achievable.
arXiv Detail & Related papers (2020-11-10T22:17:50Z)
- Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z)
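To make "splits that directly optimize downstream decision quality" concrete, here is a toy split-scoring sketch with a newsvendor-style cost standing in for the downstream decision; the cost model, the quantile decision rule, and all names are assumptions, not the paper's exact criterion.

```python
# Toy decision-aware split scoring (not the paper's criterion): pick the split
# whose per-leaf decisions minimize total downstream cost, instead of MSE.
import numpy as np

def decision_cost(demand, q, overage=1.0, underage=3.0):
    """Newsvendor cost of ordering quantity q against observed demand."""
    return overage * np.maximum(q - demand, 0) + underage * np.maximum(demand - q, 0)

def best_split(x, demand, overage=1.0, underage=3.0):
    """Scan thresholds on one feature; score each by post-split decision cost."""
    best_t, best_cost = None, np.inf
    ratio = underage / (underage + overage)       # optimal newsvendor quantile
    for t in np.unique(x)[:-1]:
        left, right = demand[x <= t], demand[x > t]
        cost = 0.0
        for leaf in (left, right):
            q = np.quantile(leaf, ratio)          # leaf-level decision
            cost += decision_cost(leaf, q, overage, underage).sum()
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
```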
This list is automatically generated from the titles and abstracts of the papers on this site.