Performance Analysis of Meta-heuristic Algorithms for a Quadratic
Assignment Problem
- URL: http://arxiv.org/abs/2007.14885v1
- Date: Wed, 29 Jul 2020 15:02:07 GMT
- Title: Performance Analysis of Meta-heuristic Algorithms for a Quadratic
Assignment Problem
- Authors: Zohreh Raziei, Reza Tavakkoli-Moghaddam, Siavash Tabrizian
- Abstract summary: A quadratic assignment problem (QAP) is an optimization problem that belongs to the class of NP-hard problems.
Heuristic and meta-heuristic algorithms are prevalent solution methods for this problem.
This paper is a comparative study that applies different meta-heuristic algorithms to solve the QAP.
- Score: 6.555180412600522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A quadratic assignment problem (QAP) is a combinatorial optimization problem
that belongs to the class of NP-hard problems, so it is difficult to solve in polynomial
time even for small instances. Research on the QAP has thus focused on methods that
overcome this difficulty. Heuristic and meta-heuristic algorithms are prevalent solution
methods for this problem. This paper is a comparative study that applies different
meta-heuristic algorithms to the QAP. One of the most popular ways of categorizing
meta-heuristic algorithms is by search strategy: (1) local search improvement
meta-heuristics and (2) global search-based meta-heuristics. What distinguishes this
paper from others is the comparative evaluation of local and global search
meta-heuristics (covering both evolutionary algorithms (EA) and swarm intelligence (SI)),
namely the genetic algorithm (GA), particle swarm optimization (PSO), a hybrid GA-PSO,
grey wolf optimization (GWO), the harmony search algorithm (HSA) and simulated annealing
(SA). In addition, one improvement heuristic algorithm (i.e., 2-Opt) is used for
comparison with the others. The PSO, GWO and 2-Opt algorithms are improved to allow a
better comparison against the other algorithms. To analyze the comparative advantage of
these algorithms, eight different factors are presented. Taking all these factors into
account, the algorithms are tested on six problems of different sizes from the QAP
Library (QAPLIB). Another contribution of this paper is a new way of measuring a strong
convergence condition for each algorithm.
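Since the abstract contrasts an improvement heuristic (2-Opt) with population-based meta-heuristics on QAPLIB instances, a minimal sketch of the QAP objective and a 2-Opt-style pairwise-swap local search may make the problem setting concrete. This is an illustrative Python sketch under generic assumptions (a random instance standing in for a QAPLIB problem, a simple swap-descent loop that stops at a local optimum); it is not the authors' implementation.

```python
import numpy as np

def qap_cost(perm, flow, dist):
    """QAP objective: sum_{i,j} flow[i][j] * dist[perm[i]][perm[j]]."""
    return np.sum(flow * dist[np.ix_(perm, perm)])

def two_opt_qap(flow, dist, perm=None, seed=0):
    """2-Opt-style pairwise-swap local search for the QAP.

    Swaps the locations of two facilities whenever the swap lowers the
    objective, and repeats until no improving swap exists (a local optimum).
    """
    rng = np.random.default_rng(seed)
    n = flow.shape[0]
    perm = rng.permutation(n) if perm is None else np.array(perm)
    best_cost = qap_cost(perm, flow, dist)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 1, n):
                candidate = perm.copy()
                candidate[i], candidate[j] = candidate[j], candidate[i]
                cost = qap_cost(candidate, flow, dist)
                if cost < best_cost:
                    perm, best_cost = candidate, cost
                    improved = True
    return perm, best_cost

if __name__ == "__main__":
    # Small random instance standing in for a QAPLIB problem (hypothetical data).
    rng = np.random.default_rng(42)
    n = 8
    flow = rng.integers(0, 10, size=(n, n))
    dist = rng.integers(0, 10, size=(n, n))
    assignment, cost = two_opt_qap(flow, dist)
    print("assignment:", assignment, "cost:", int(cost))
```

The population-based methods compared in the paper (GA, PSO, GA-PSO, GWO, HSA, SA) would replace the swap-descent loop with their own search operators while reusing the same objective function.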
Related papers
- A Novel Ranking Scheme for the Performance Analysis of Stochastic Optimization Algorithms using the Principles of Severity [9.310464457958844]
We provide a novel ranking scheme to rank the algorithms over multiple single-objective optimization problems.
The results of the algorithms are compared using a robust bootstrapping-based hypothesis testing procedure (a generic sketch of such a bootstrap comparison appears after this list).
arXiv Detail & Related papers (2024-05-31T19:35:34Z)
- Quality-Diversity Algorithms Can Provably Be Helpful for Optimization [24.694984679399315]
Quality-Diversity (QD) algorithms aim to find a set of high-performing, yet diverse solutions.
This paper tries to shed some light on the optimization ability of QD algorithms via rigorous running time analysis.
arXiv Detail & Related papers (2024-01-19T07:40:24Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Combining K-means type algorithms with Hill Climbing for Joint Stratification and Sample Allocation Designs [0.0]
This is a sampling problem in which we search for the optimal stratification from the set of all possible stratifications of basic strata, and evaluating the cost of each solution is expensive.
We compare the above multi-stage combination of algorithms with three recent algorithms and report the solution costs, evaluation times and training times.
arXiv Detail & Related papers (2021-08-18T08:41:58Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Towards Meta-Algorithm Selection [78.13985819417974]
Instance-specific algorithm selection (AS) deals with the automatic selection of an algorithm from a fixed set of candidates.
We show that meta-algorithm-selection can indeed prove beneficial in some cases.
arXiv Detail & Related papers (2020-11-17T17:27:33Z)
- Benchmarking Meta-heuristic Optimization [0.0]
Many meta-heuristic algorithms are very efficient when solving nonlinear functions.
A meta-heuristic algorithm is a problem-independent technique that can be applied to a broad range of problems.
arXiv Detail & Related papers (2020-07-27T12:25:31Z)
- Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
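The severity-based ranking paper listed above compares stochastic optimizers with a bootstrapping-based hypothesis testing procedure. The snippet below is only a generic illustration of such a bootstrap comparison between two solvers' run results; the pooled-resampling null hypothesis, the function name, and the sample numbers are assumptions for illustration and are not taken from that paper.

```python
import numpy as np

def bootstrap_mean_diff_test(results_a, results_b, n_boot=10_000, seed=0):
    """Two-sided bootstrap test for a difference in mean best cost
    between two stochastic optimizers (generic illustration)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(results_a, float), np.asarray(results_b, float)
    observed = a.mean() - b.mean()
    # Resample under the null hypothesis that both samples share one distribution.
    pooled = np.concatenate([a, b])
    diffs = np.empty(n_boot)
    for k in range(n_boot):
        ra = rng.choice(pooled, size=a.size, replace=True)
        rb = rng.choice(pooled, size=b.size, replace=True)
        diffs[k] = ra.mean() - rb.mean()
    p_value = np.mean(np.abs(diffs) >= abs(observed))
    return observed, p_value

# Example: best costs from 30 independent runs of two solvers (made-up numbers).
runs_sa = 153000 + np.random.default_rng(1).normal(0, 800, 30)
runs_gwo = 152300 + np.random.default_rng(2).normal(0, 900, 30)
diff, p = bootstrap_mean_diff_test(runs_sa, runs_gwo)
print(f"mean difference = {diff:.1f}, bootstrap p-value = {p:.3f}")
```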