On the Performance of Metaheuristics: A Different Perspective
- URL: http://arxiv.org/abs/2001.08928v1
- Date: Fri, 24 Jan 2020 09:34:10 GMT
- Title: On the Performance of Metaheuristics: A Different Perspective
- Authors: Hamid Reza Boveiri and Raouf Khayami
- Abstract summary: We study some basic evolutionary and swarm-intelligence metaheuristics, i.e., Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based Optimization (TLBO), and Cuckoo Optimization Algorithm (COA).
A large number of experiments have been conducted on 20 different optimization benchmark functions with different characteristics, and the results reveal some fundamental conclusions as well as a ranking order among these metaheuristics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, we are immersed in tens of newly-proposed evolutionary and
swarm-intelligence metaheuristics, which makes it very difficult to choose a
proper one to be applied on a specific optimization problem at hand. On the
other hand, most of these metaheuristics are nothing but slightly modified
variants of the basic metaheuristics. For example, Differential Evolution (DE)
or Shuffled Frog Leaping (SFL) are just Genetic Algorithms (GA) with a
specialized operator or an extra local search, respectively. Therefore, what
comes to mind is whether the behavior of such newly-proposed metaheuristics
can be investigated on the basis of studying the specifications and
characteristics of their ancestors. In this paper, a comprehensive evaluation
study on some basic metaheuristics, i.e., Genetic Algorithm (GA), Particle Swarm
Optimization (PSO), Artificial Bee Colony (ABC), Teaching-Learning-Based
Optimization (TLBO), and Cuckoo Optimization Algorithm (COA), is conducted,
which gives us deeper insight into their performance so that we can better
estimate the performance and applicability of all other variants originating
from them. A large number of experiments have been
conducted on 20 different combinatorial optimization benchmark functions with
different characteristics, and the results reveal some fundamental
conclusions as well as the following ranking order among these metaheuristics:
{ABC, PSO, TLBO, GA, COA}, i.e., ABC and COA are the best and the worst methods
from the performance point of view, respectively. In addition, from the
convergence perspective, PSO and ABC show significantly better convergence on
unimodal and multimodal functions, respectively, while GA and COA suffer from
premature convergence to local optima in many cases and need alternative
mutation mechanisms to enhance diversification and global search.
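The benchmarking protocol described above is easy to reproduce in miniature. Below is a minimal, hypothetical sketch (not the authors' code) of canonical global-best PSO on two of the usual test functions, the unimodal Sphere and the multimodal Rastrigin; the parameter values (inertia w=0.7, acceleration coefficients c1=c2=1.5, 30 particles) are common defaults assumed here, not the paper's settings.

```python
import numpy as np

# Two standard benchmarks: Sphere (unimodal) and Rastrigin (multimodal).
def sphere(x):
    return np.sum(x ** 2, axis=-1)

def rastrigin(x):
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

def pso(f, dim=10, n_particles=30, iters=500, bounds=(-5.12, 5.12),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical global-best PSO; parameters are illustrative, not tuned."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest, pbest_f = x.copy(), f(x)               # personal bests
    g = pbest[np.argmin(pbest_f)]                 # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return pbest_f.min()

for f in (sphere, rastrigin):
    print(f.__name__, pso(f))
```

Swapping the objective function makes it easy to contrast convergence on unimodal versus multimodal landscapes, which is the comparison the paper carries out at scale across 20 benchmarks and five metaheuristics.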
Related papers
- Genetic Engineering Algorithm (GEA): An Efficient Metaheuristic Algorithm for Solving Combinatorial Optimization Problems [1.8434042562191815]
Genetic Algorithms (GAs) are known for their efficiency in solving optimization problems.
This paper proposes a new metaheuristic algorithm called the Genetic Engineering Algorithm (GEA) that draws inspiration from genetic engineering concepts.
arXiv Detail & Related papers (2023-09-28T13:05:30Z)
- Discovering Attention-Based Genetic Algorithms via Meta-Black-Box Optimization [13.131971623143622]
We discover entirely new genetic algorithms in a data-driven fashion.
We parametrize selection and mutation rate adaptation as cross- and self-attention modules.
The learned algorithm can be applied to previously unseen optimization problems, search dimensions & evaluation budgets.
arXiv Detail & Related papers (2023-04-08T12:14:15Z)
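To make the idea above concrete, selection parametrized as an attention module might look like the following toy numpy sketch (our illustration, not the paper's meta-learned architecture): per-individual fitness features are projected to queries and keys, and pooled softmax attention scores serve as selection probabilities. The feature choice, projection size, and random weights are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_selection(fitness, d=8):
    """Toy self-attention over fitness features -> selection probabilities.
    The projection weights would normally be meta-learned; random here."""
    n = len(fitness)
    z = (fitness - fitness.mean()) / (fitness.std() + 1e-9)  # z-scored fitness
    ranks = np.argsort(np.argsort(fitness)) / (n - 1)        # normalized ranks
    feats = np.stack([z, ranks], axis=1)                     # (n, 2) features
    Wq, Wk = rng.normal(size=(2, d)), rng.normal(size=(2, d))
    q, k = feats @ Wq, feats @ Wk                            # queries, keys
    scores = softmax(q @ k.T / np.sqrt(d))                   # (n, n) attention
    return scores.mean(axis=0)                               # pooled probabilities

fitness = rng.random(10)
p = attention_selection(fitness)          # sums to 1 by construction
parents = rng.choice(10, size=10, p=p)    # sample parent indices
```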
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
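ZO-signGD, highlighted above, admits a compact description: estimate the gradient from function-value differences along random directions, then update using only the sign of that estimate. A minimal sketch under standard assumptions (a two-point Gaussian-smoothing estimator; the step size, smoothing radius, and direction count are arbitrary choices here):

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.01, mu=1e-3, n_dirs=20, iters=200, seed=0):
    """Zeroth-order sign-GD: step along the sign of a two-point gradient estimate."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.normal(size=x.shape)                 # random direction
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= lr * np.sign(g / n_dirs)                    # sign-based update
    return x

# Only function evaluations are needed, as with molecular objectives.
f = lambda x: np.sum((x - 1.0) ** 2)
x_opt = zo_sign_gd(f, np.zeros(5))    # approaches the minimizer at all-ones
```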
- Accelerating the Evolutionary Algorithms by Gaussian Process Regression with $\epsilon$-greedy acquisition function [2.7716102039510564]
We propose a novel method of estimating the elite individual in order to accelerate the convergence of optimization.
The proposal shows broad promise for estimating elite individuals and accelerating convergence.
arXiv Detail & Related papers (2022-10-13T07:56:47Z)
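The $\epsilon$-greedy acquisition mechanism above is simple to state: with probability $\epsilon$ propose a random candidate (exploration), otherwise pick the candidate whose surrogate prediction is best (exploitation). A hypothetical sketch using scikit-learn's GaussianProcessRegressor as the surrogate; this reflects the generic mechanism, not the paper's implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def eps_greedy_propose(X, y, candidates, eps=0.1, rng=None):
    """With prob. eps explore at random; otherwise exploit the GP's best mean."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < eps:
        return candidates[rng.integers(len(candidates))]  # exploration
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu = gp.predict(candidates)                           # posterior mean
    return candidates[np.argmin(mu)]                      # exploit (minimization)

# Usage: X, y are evaluated points; candidates could be offspring of the elites.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, (20, 3))
y = np.sum(X ** 2, axis=1)
cands = rng.uniform(-5, 5, (100, 3))
x_next = eps_greedy_propose(X, y, cands, rng=rng)
```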
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
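One popular acquisition function recoverable in such a family is expected improvement, whose closed form for minimization is $\mathrm{EI}(x) = (y^{*} - \mu(x))\,\Phi(z) + \sigma(x)\,\varphi(z)$ with $z = (y^{*} - \mu(x))/\sigma(x)$. A minimal sketch of that standard formula (our code, independent of the paper):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """Closed-form EI for minimization, given GP posterior mean/std at candidates."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Pick the candidate maximizing EI under the surrogate posterior.
mu = np.array([0.3, 0.1, 0.5])
sigma = np.array([0.2, 0.05, 0.4])
best_idx = np.argmax(expected_improvement(mu, sigma, y_best=0.2))
```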
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
In order to reduce variance of the process, we also explore methods to stabilize the process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
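One natural differentiable amalgamation mechanism is a softmax-weighted convex combination of the teachers' proposed update directions, so that the mixing weights can themselves be trained by gradient descent. The numpy sketch below shows only that combination idea with two toy teachers (plain SGD and momentum); it is our illustration, not one of the paper's three mechanisms.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def amalgamated_step(x, grad, m, logits, lr=0.1):
    """One step: convex combination of two 'teacher' directions (SGD, momentum)."""
    m = 0.9 * m + grad                        # momentum teacher's internal state
    w = softmax(logits)                       # differentiable mixing weights
    step = w[0] * (-grad) + w[1] * (-m)       # weighted sum of teacher updates
    return x + lr * step, m

# The logits would be meta-learned in the amalgamation setting; fixed here.
x, m, logits = np.ones(3), np.zeros(3), np.zeros(2)
for _ in range(50):
    grad = 2 * x                              # gradient of f(x) = ||x||^2
    x, m = amalgamated_step(x, grad, m, logits)
```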
- The importance of being constrained: dealing with infeasible solutions in Differential Evolution and beyond [0.0]
We argue that results produced by an optimisation algorithm cannot be considered reproducible unless the algorithm fully specifies what should be done with solutions generated outside the domain.
Here, we demonstrate that, at least in algorithms based on Differential Evolution, this choice induces notably different behaviours.
We urge the field of optimisation to formalise and adopt the idea of a new algorithmic component in optimisers.
arXiv Detail & Related papers (2022-03-07T16:49:18Z)
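The argument above is that a Differential Evolution implementation is under-specified until it states what happens to trial vectors generated outside the domain. Below is a standard DE/rand/1/bin loop (our sketch, not tied to the paper) in which the boundary-handling rule is an explicit, swappable component, e.g. clipping to the boundary versus uniform resampling of the infeasible components:

```python
import numpy as np

def de(f, dim=10, pop=30, iters=300, bounds=(-5.12, 5.12),
       F=0.5, CR=0.9, repair="clip", seed=0):
    """DE/rand/1/bin with the out-of-bounds policy as an explicit component."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))
    fX = np.array([f(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = a + F * (b - c)                   # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True            # keep >= 1 mutant gene
            trial = np.where(cross, mutant, X[i])
            out = (trial < lo) | (trial > hi)          # infeasible components
            if repair == "clip":                       # saturate at the boundary
                trial = np.clip(trial, lo, hi)
            elif repair == "reinit":                   # resample inside the domain
                trial[out] = rng.uniform(lo, hi, out.sum())
            ft = f(trial)
            if ft <= fX[i]:                            # greedy selection
                X[i], fX[i] = trial, ft
    return fX.min()

rastrigin = lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)
print(de(rastrigin, repair="clip"), de(rastrigin, repair="reinit"))
```

Running both repair policies from the same seed illustrates the paper's point that this seemingly minor choice can induce notably different behaviour.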
- Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio of $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
arXiv Detail & Related papers (2021-10-18T14:00:22Z)
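GSEMO, the algorithm analyzed above, is itself very small: keep an archive of mutually non-dominated solutions, pick one uniformly at random, flip each bit independently with probability 1/n, and insert the offspring unless some archive member weakly dominates it. A sketch on a toy bi-objective pseudo-Boolean problem (our illustration, not the paper's code):

```python
import numpy as np

def weakly_dominates(u, v):
    """u weakly dominates v: at least as good in every objective (maximization)."""
    return all(a >= b for a, b in zip(u, v))

def gsemo(objectives, n=20, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, n)
    archive = [(x, objectives(x))]                       # non-dominated archive
    for _ in range(iters):
        parent, _ = archive[rng.integers(len(archive))]  # uniform parent choice
        flips = rng.random(n) < 1.0 / n                  # standard bit mutation
        child = np.where(flips, 1 - parent, parent)
        fc = objectives(child)
        if not any(weakly_dominates(fv, fc) for _, fv in archive):
            archive = [(v, fv) for v, fv in archive
                       if not weakly_dominates(fc, fv)]
            archive.append((child, fc))
    return [fv for _, fv in archive]                     # the Pareto front found

# Toy bi-objective with a full trade-off front: maximize ones vs. zeros.
bi = lambda x: (int(x.sum()), int(len(x) - x.sum()))
front = sorted(gsemo(bi))
```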
- cMLSGA: A Co-Evolutionary Multi-Level Selection Genetic Algorithm for Multi-Objective Optimization [0.0]
Multi-Level Selection Genetic Algorithm (MLSGA) already shows good performance on a range of problems.
This paper proposes a distinct set of co-evolutionary mechanisms, which defines co-evolution as competition between collectives rather than individuals.
arXiv Detail & Related papers (2021-04-22T13:52:21Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a powerful tool for many machine learning problems.
We propose a novel algorithm named stocBiO, built on a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)