Combining Particle Swarm Optimizer with SQP Local Search for Constrained
Optimization Problems
- URL: http://arxiv.org/abs/2101.10936v1
- Date: Mon, 25 Jan 2021 09:34:52 GMT
- Title: Combining Particle Swarm Optimizer with SQP Local Search for Constrained
Optimization Problems
- Authors: Carwyn Pelley, Mauro S. Innocente, Johann Sienz
- Abstract summary: It is shown that the likely difference between leading algorithms is in their local search ability.
A comparison with other leading optimizers on the tested benchmark suite indicates that the hybrid GP-PSO with local search competes alongside other leading PSO algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Combining a General-Purpose Particle Swarm Optimizer (GP-PSO) with a
Sequential Quadratic Programming (SQP) algorithm for constrained optimization
problems has been shown to be highly beneficial to the refinement, and in some
cases the success, of finding a global optimum solution. It is shown that the
likely difference between leading algorithms is in their local search ability.
A comparison with other leading optimizers on the tested benchmark suite
indicates that the hybrid GP-PSO with local search competes alongside other
leading PSO algorithms.
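The abstract describes a two-stage hybrid: a general-purpose PSO explores the search space globally, and an SQP solver refines the best candidate subject to the constraints. The sketch below is a minimal illustration of that pattern under stated assumptions, not the authors' GP-PSO implementation: it runs a basic penalty-handling PSO and then hands the swarm's best position to SciPy's SLSQP solver for local refinement. The toy objective, the single inequality constraint, and all swarm, penalty, and bound settings are placeholders.

```python
# Minimal sketch: PSO global search followed by SQP-style local refinement
# (SciPy's SLSQP). Illustrative only -- the objective, constraint, and all
# swarm/penalty parameters are assumptions, not the paper's GP-PSO settings.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2      # toy objective

def constraint(x):
    return 1.0 - (x[0] ** 2 + x[1] ** 2)               # feasible when g(x) >= 0

def penalized(x, weight=1e3):
    # Static penalty lets the swarm treat the constrained problem as unconstrained.
    return objective(x) + weight * max(0.0, -constraint(x)) ** 2

def pso(n_particles=30, n_iters=200, dim=2, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([penalized(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([penalized(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

x0 = pso()                                              # global exploration
result = minimize(objective, x0, method="SLSQP",        # local SQP refinement
                  bounds=[(-5.0, 5.0), (-5.0, 5.0)],
                  constraints=[{"type": "ineq", "fun": constraint}])
print(result.x, result.fun)
```

In this arrangement the SQP step supplies the local-search ability that the abstract identifies as the likely differentiator between leading algorithms.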
Related papers
- A Random-Key Optimizer for Combinatorial Optimization [0.0]
The Random-Key Optimizer (RKO) is a versatile and efficient local search method tailored for optimization problems.
Using the random-key concept, RKO encodes solutions as vectors of random keys that are subsequently decoded into feasible solutions via problem-specific decoders.
The RKO framework is able to combine a plethora of classic metaheuristics, each capable of operating independently or in parallel, with solution sharing facilitated through an elite solution pool.
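As a minimal illustration of the random-key concept in that summary (not the RKO code itself), the sketch below represents a candidate solution as a vector of keys in [0, 1) and decodes it into a tour by sorting; the TSP-style decoder and problem size are assumed stand-ins for a problem-specific decoder.

```python
# Minimal random-key encoding/decoding sketch (illustrative, not the RKO
# implementation): a solution is a vector of keys in [0, 1); a problem-specific
# decoder maps it to a feasible solution, here a tour over 5 cities.
import numpy as np

rng = np.random.default_rng(42)
n_cities = 5
keys = rng.random(n_cities)        # random-key chromosome

def decode_to_tour(keys):
    # Sorting the keys yields a permutation, so every key vector decodes to a
    # feasible tour; the metaheuristics only ever operate on the keys.
    return np.argsort(keys)

print(keys.round(2), "->", decode_to_tour(keys))
```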
arXiv Detail & Related papers (2024-11-06T22:23:29Z)
- The Firefighter Algorithm: A Hybrid Metaheuristic for Optimization Problems [3.2432648012273346]
The Firefighter Optimization (FFO) algorithm is a new hybrid metaheuristic for optimization problems.
To evaluate the performance of FFO, extensive experiments were conducted, wherein the FFO was examined against 13 commonly used optimization algorithms.
The results demonstrate that FFO achieves comparable performance and, in some scenarios, outperforms commonly adopted optimization algorithms in terms of the obtained fitness, time taken for execution, and search space covered per unit of time.
arXiv Detail & Related papers (2024-06-01T18:38:59Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations [50.591267188664666]
Propulate is an evolutionary optimization algorithm and software package for global optimization.
We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration.
We find that Propulate is up to three orders of magnitude faster without sacrificing solution accuracy.
arXiv Detail & Related papers (2023-01-20T18:17:34Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Evolutionary Algorithm and Multifactorial Evolutionary Algorithm on Clustered Shortest-Path Tree problem [2.578242050187029]
The Clustered Shortest-Path Tree Problem (CluSPT) is an NP-hard problem.
To enhance the performance of the search process, two approaches are proposed.
arXiv Detail & Related papers (2020-10-19T08:37:18Z)
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove an $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly smooth optimization domains.
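For context on the stationarity measure in that summary, the display below recalls the standard definition of the Moreau envelope and its gradient; the notation ($f$, $\lambda$) is generic here rather than taken from the paper.

```latex
% Standard Moreau envelope and its gradient, used as a stationarity measure
% for weakly convex objectives (generic notation, not the paper's).
f_\lambda(x) = \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^2 \Big\},
\qquad
\|\nabla f_\lambda(x)\| = \tfrac{1}{\lambda}\,\big\|x - \operatorname{prox}_{\lambda f}(x)\big\|.
```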
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as Quantum Approximate Optimization Algorithm (QAOA) are considered as one of the most encouraging approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
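As a rough sketch of how a cross-entropy method tunes variational parameters (not the paper's setup), the snippet below repeatedly samples (gamma, beta) from a Gaussian, keeps the lowest-energy elite samples, and refits the distribution; the stand-in objective, sample counts, and elite fraction are assumptions in place of a QAOA expectation value measured on a quantum machine.

```python
# Minimal cross-entropy (CE) parameter search (illustrative): a classical
# stand-in objective replaces the QAOA expectation value <H>(gamma, beta).
import numpy as np

def surrogate_energy(params):
    # Placeholder landscape -- NOT a quantum simulation.
    return np.sin(params[0]) * np.cos(params[1]) + 0.1 * np.sum(params ** 2)

rng = np.random.default_rng(0)
mu, sigma = np.zeros(2), np.full(2, np.pi)       # Gaussian over (gamma, beta)
for _ in range(30):
    samples = rng.normal(mu, sigma, size=(100, 2))
    scores = np.array([surrogate_energy(s) for s in samples])
    elites = samples[np.argsort(scores)[:10]]    # keep the 10 lowest-energy draws
    mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
print("estimated (gamma, beta):", mu)
```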
arXiv Detail & Related papers (2020-03-11T13:52:41Z)
- Generalized Self-Adapting Particle Swarm Optimization algorithm with archive of samples [0.0]
The paper introduces a new version of the Generalized Self-Adapting Particle Swarm Optimization algorithm, abbreviated as M-GAPSO.
In comparison with the original GAPSO formulation, it includes the following four features: a global restart management scheme, samples gathering within an R-Tree based index, adaptation of sampling behavior based on global particle performance, and a specific approach to local search.
arXiv Detail & Related papers (2020-02-28T00:03:17Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.