Benchmarking Meta-heuristic Optimization
- URL: http://arxiv.org/abs/2007.13476v1
- Date: Mon, 27 Jul 2020 12:25:31 GMT
- Title: Benchmarking Meta-heuristic Optimization
- Authors: Mona Nasr, Omar Farouk, Ahmed Mohamedeen, Ali Elrafie, Marwan Bedeir
and Ali Khaled
- Abstract summary: Many meta-heuristic algorithms are very efficient when solving nonlinear functions.
A meta-heuristic algorithm is a problem-independent technique that can be applied to a broad range of problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving an optimization task in any domain is a very challenging problem,
especially when dealing with nonlinear problems and non-convex functions. Many
meta-heuristic algorithms are very efficient when solving nonlinear functions.
A meta-heuristic algorithm is a problem-independent technique that can be
applied to a broad range of problems. In this experiment, some of the
evolutionary algorithms will be tested, evaluated, and compared with each
other. We will go through the Genetic Algorithm, Differential Evolution, the
Particle Swarm Optimization algorithm, the Grey Wolf Optimizer, and Simulated
Annealing. Their performance will be evaluated from several points of view,
such as how each algorithm behaves across generations and how close its
result comes to the optimum. Other points of evaluation are discussed in
depth in later sections.
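As a concrete illustration of this evaluation protocol, the sketch below benchmarks two of the listed algorithms, Differential Evolution and Simulated Annealing, on the Rastrigin function and records the best objective value as the run progresses. This is a minimal illustrative harness, not the paper's code; the test function, population size, cooling schedule, and all other hyperparameters are assumptions chosen for brevity.

```python
import numpy as np

def rastrigin(x):
    """Nonlinear, non-convex benchmark; global minimum is 0 at x = 0."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def differential_evolution(f, dim=5, pop=30, gens=200, F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.12, 5.12, (pop, dim))
    fit = np.array([f(x) for x in X])
    trace = []
    for _ in range(gens):
        for i in range(pop):
            # Mutate three distinct other members, then crossover with X[i].
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), -5.12, 5.12)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee one mutated gene
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fit[i]:              # greedy selection
                X[i], fit[i] = trial, f_trial
        trace.append(fit.min())               # best member this generation
    return trace

def simulated_annealing(f, dim=5, iters=6000, T0=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.12, 5.12, dim)
    fx = f(x)
    best, trace = fx, []
    for k in range(iters):
        T = T0 / (1 + k)                      # simple cooling schedule
        cand = np.clip(x + rng.normal(0.0, 0.5, dim), -5.12, 5.12)
        fc = f(cand)
        # Accept improvements always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
        best = min(best, fx)
        trace.append(best)
    return trace

de_trace = differential_evolution(rastrigin)
sa_trace = simulated_annealing(rastrigin)
print(f"DE best: {de_trace[-1]:.4f}, SA best: {sa_trace[-1]:.4f} (optimum 0)")
```

The two budgets are roughly matched at 6,000 function evaluations each, so the per-generation traces support exactly the kind of fair comparison the abstract describes; with these settings the population-based DE search typically reaches a lower Rastrigin value than the single-chain SA.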
Related papers
- A Generalized Evolutionary Metaheuristic (GEM) Algorithm for Engineering Optimization [1.6589012298747952]
A major trend in recent years is the use of nature-inspired metaheuristic algorithms (NIMA).
There are over 540 algorithms in the literature, and there is no unified framework to understand the search mechanisms of different algorithms.
We propose a generalized evolutionary metaheuristic algorithm to unify more than 20 different algorithms.
arXiv Detail & Related papers (2024-07-02T09:55:15Z)
- Performance Evaluation of Evolutionary Algorithms for Analog Integrated Circuit Design Optimisation [0.0]
An automated sizing approach for analog circuits is presented in this paper.
A targeted search of the search space has been implemented using a particle generation function and a repair-bounds function.
The algorithms are tuned and modified to converge to a better solution.
arXiv Detail & Related papers (2023-10-19T03:26:36Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
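To make the problem statement concrete: an integer least-squares instance asks for the integer vector x minimizing ||y - Ax||. The toy sketch below solves a tiny instance by exhaustive search over a small box. It illustrates only the objective, not the HATS algorithm; tree-search methods exist precisely because this brute force is exponential in the dimension, and the matrix, noise level, and search box here are assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x_true = np.array([2, -1, 3])
y = A @ x_true + 0.05 * rng.standard_normal(4)   # noisy linear observation

# Exhaustive search over the integer box {-4, ..., 4}^3 (729 candidates).
best = min(itertools.product(range(-4, 5), repeat=3),
           key=lambda x: np.linalg.norm(y - A @ np.array(x)))
print(best)   # expected to recover (2, -1, 3) at this low noise level
```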
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z)
- Recent Theoretical Advances in Non-Convex Optimization [56.88981258425256]
Motivated by recent increased interest in the analysis of optimization algorithms for non-convex optimization in deep networks and other problems in data analysis, we give an overview of recent theoretical results on optimization algorithms for non-convex optimization.
arXiv Detail & Related papers (2020-12-11T08:28:51Z)
- Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence [29.189409618561964]
We develop new adaptive algorithms for variational inequalities with monotone operators.
Our algorithms automatically adapt to unknown problem parameters.
We show that our algorithms are universal and simultaneously achieve the optimal convergence rates.
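For context, a classical non-adaptive baseline for monotone variational inequalities is the extragradient method, sketched below on the simplest bilinear saddle problem min_x max_y xy; the fixed step size here is exactly what adaptive algorithms like the paper's avoid having to tune. The operator, starting point, and step size are illustrative assumptions, and this is not the paper's algorithm.

```python
import numpy as np

def F(z):                      # monotone operator (grad_x f, -grad_y f)
    x, y = z                   # for the bilinear game f(x, y) = x * y
    return np.array([y, -x])

# Plain gradient descent-ascent spirals outward on this game; extragradient's
# look-ahead step converges to the saddle point (0, 0).
z = np.array([1.0, 1.0])
eta = 0.5                      # fixed step size, chosen by hand
for _ in range(100):
    z_half = z - eta * F(z)    # extrapolation (look-ahead) step
    z = z - eta * F(z_half)    # update evaluated at the look-ahead point
print(z)                       # -> close to the solution (0, 0)
```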
arXiv Detail & Related papers (2020-10-15T14:44:26Z)
- Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization [21.555331273873175]
We consider the task of decentralized minimization of the sum of smooth strongly convex functions stored across the nodes of a network.
We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees.
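As a sketch of this problem setting (and deliberately not of the paper's two proposed algorithms), the snippet below runs plain decentralized gradient descent on a ring network: each node privately holds a strongly convex quadratic, and every iteration combines gossip averaging with neighbors and a local gradient step. The network, mixing weights, and step size are illustrative; with a constant step, DGD only reaches a neighborhood of the true minimizer, which is part of what motivates optimal methods like those in the paper.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
b = rng.standard_normal(n)           # node i privately owns b[i]; the sum
                                     # of f_i(x) = 0.5*(x - b[i])^2 is
                                     # minimized at mean(b)

# Doubly stochastic mixing matrix for a ring: average with both neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                      # x[i] is node i's local iterate
step = 0.02
for _ in range(3000):
    grad = x - b                     # local gradients; no data is shared
    x = W @ x - step * grad          # gossip with neighbors, then step
print(np.round(x, 2), round(b.mean(), 2))   # iterates cluster near mean(b)
```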
arXiv Detail & Related papers (2020-06-21T11:23:20Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
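A generic example of such step-size adaptivity (illustrative only, not the algorithm this paper proposes) is the AdaGrad-norm rule, which divides a base step by the square root of the accumulated squared gradient norms and thereby adapts to unknown smoothness and noise without manual tuning. The objective and constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x):
    # Noisy gradient of f(x) = ||x||^2 (gradient 2x plus Gaussian noise).
    return 2 * x + 0.1 * rng.standard_normal(x.shape)

x = np.ones(10)
eta, G = 1.0, 0.0
for _ in range(2000):
    g = stoch_grad(x)
    G += g @ g                        # running sum of squared gradient norms
    x = x - (eta / np.sqrt(G)) * g    # step shrinks as gradients accumulate
print(np.linalg.norm(x))              # -> small: near the minimizer x = 0
```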
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.