Generalized Self-Adapting Particle Swarm Optimization algorithm with
archive of samples
- URL: http://arxiv.org/abs/2002.12485v1
- Date: Fri, 28 Feb 2020 00:03:17 GMT
- Title: Generalized Self-Adapting Particle Swarm Optimization algorithm with
archive of samples
- Authors: Michał Okulewicz, Mateusz Zaborski, Jacek Mańdziuk
- Abstract summary: The paper introduces a new version of the algorithm, abbreviated as M-GAPSO.
In comparison with the original GAPSO formulation it includes the following four features: a global restart management scheme, samples gathering within an R-Tree based index, adaptation of a sampling behavior based on a global particle performance, and a specific approach to local search.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we enhance the Generalized Self-Adapting Particle Swarm
Optimization algorithm (GAPSO), initially introduced at the Parallel Problem
Solving from Nature 2018 conference, and investigate its properties. The
research on GAPSO is underpinned by the following two assumptions: (1) it is
possible to achieve good performance of an optimization algorithm through
utilization of all of the gathered samples, (2) the best performance can be
accomplished by means of a combination of specialized sampling behaviors
(Particle Swarm Optimization, Differential Evolution, and locally fitted square
functions). From a software engineering point of view, GAPSO considers a
standard Particle Swarm Optimization algorithm as an ideal starting point for
creating a general-purpose global optimization framework. Within this framework
hybrid optimization algorithms are developed, and various additional techniques
(like algorithm restart management or adaptation schemes) are tested. The paper
introduces a new version of the algorithm, abbreviated as M-GAPSO. In
comparison with the original GAPSO formulation it includes the following four
features: a global restart management scheme, samples gathering within an
R-Tree based index (archive/memory of samples), adaptation of a sampling
behavior based on a global particle performance, and a specific approach to
local search. The above-mentioned enhancements resulted in improved performance
of M-GAPSO over GAPSO, observed both on the COCO BBOB testbed and in the black-box
optimization competition BBComp. Also, for lower-dimensional functions (up
to 5D) the results of M-GAPSO are better than or comparable to the state-of-the-art
version of CMA-ES (namely the KL-BIPOP-CMA-ES algorithm presented at the GECCO
2017 conference).
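The listing itself contains no code; as a rough illustration of the hybrid-sampling idea described in the abstract (several sampling behaviors whose usage adapts to their observed performance, with every evaluated sample kept in an archive), a minimal sketch might look as follows. The behaviors, constants, and the adaptation rule below are assumptions for illustration, not the authors' M-GAPSO implementation (which, for instance, uses an R-Tree index rather than a plain list).

```python
# Illustrative sketch only: a hybrid PSO/DE loop with an archive of evaluated
# samples and performance-based adaptation of behavior probabilities.
# Names, constants and the adaptation rule are assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                                 # toy objective
    return float(np.sum(x ** 2))

DIM, SWARM, ITERS = 5, 20, 200
lo, hi = -5.0, 5.0

X = rng.uniform(lo, hi, (SWARM, DIM))          # positions
V = np.zeros((SWARM, DIM))                     # velocities (used by the PSO behavior)
F = np.array([sphere(x) for x in X])
P, PF = X.copy(), F.copy()                     # personal bests
g = X[np.argmin(F)].copy()                     # global best
archive = list(zip(map(tuple, X), F))          # stand-in for the R-Tree sample index
success = np.ones(2)                           # per-behavior success counts (PSO, DE)

for _ in range(ITERS):
    probs = success / success.sum()            # adapt behavior usage to performance
    for i in range(SWARM):
        b = rng.choice(2, p=probs)
        if b == 0:                             # PSO-style move
            r1, r2 = rng.random(DIM), rng.random(DIM)
            V[i] = 0.72 * V[i] + 1.49 * r1 * (P[i] - X[i]) + 1.49 * r2 * (g - X[i])
            cand = X[i] + V[i]
        else:                                  # DE/rand/1-style move
            a, bb, c = X[rng.choice(SWARM, 3, replace=False)]
            cand = a + 0.8 * (bb - c)
        cand = np.clip(cand, lo, hi)
        fc = sphere(cand)
        archive.append((tuple(cand), fc))      # every sample is archived
        if fc < F[i]:                          # greedy acceptance, credit the behavior
            success[b] += 1
            X[i], F[i] = cand, fc
            if fc < PF[i]:
                P[i], PF[i] = cand, fc
            if fc < sphere(g):
                g = cand.copy()

print("best value:", sphere(g), "archived samples:", len(archive))
```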
Related papers
- The Firefighter Algorithm: A Hybrid Metaheuristic for Optimization Problems [3.2432648012273346]
The Firefighter Optimization (FFO) algorithm is a new hybrid metaheuristic for optimization problems.
To evaluate the performance of FFO, extensive experiments were conducted, wherein the FFO was examined against 13 commonly used optimization algorithms.
The results demonstrate that FFO achieves comparable performance and, in some scenarios, outperforms commonly adopted optimization algorithms in terms of the obtained fitness, execution time, and search space covered per unit of time.
arXiv Detail & Related papers (2024-06-01T18:38:59Z) - Ant Colony Sampling with GFlowNets for Combinatorial Optimization [68.84985459701007]
Generative Flow Ant Colony Sampler (GFACS) is a novel meta-heuristic method that hierarchically combines amortized inference and parallel search.
Our method first leverages Generative Flow Networks (GFlowNets) to amortize a multi-modal prior distribution over a solution space.
arXiv Detail & Related papers (2024-03-11T16:26:06Z) - A new simplified MOPSO based on Swarm Elitism and Swarm Memory: MO-ETPSO [0.0]
Elitist PSO (MO-ETPSO) is adapted for multi-objective optimization problems.
The proposed algorithm integrates core strategies from the well-established NSGA-II approach.
A novel aspect of the algorithm is the introduction of a swarm memory and swarm elitism.
arXiv Detail & Related papers (2024-02-20T09:36:18Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Learning Regions of Interest for Bayesian Optimization with Adaptive
Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
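As a hedged illustration of the region-of-interest filtering idea mentioned above (not the BALLET implementation), one way to shrink the search space is to keep only candidates whose lower confidence bound under a GP surrogate does not exceed the best upper confidence bound; the objective, kernel, and the beta parameter below are assumptions.

```python
# Sketch of high-confidence region-of-interest filtering for Bayesian optimization
# (minimization): keep candidates whose GP lower confidence bound is below the best
# upper confidence bound. Objective, kernel and constants are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def f(x):                                             # toy 1-D objective
    return np.sin(3 * x) + 0.5 * x ** 2

X_obs = rng.uniform(-2, 2, (8, 1))                    # a few observed points
y_obs = f(X_obs).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X_obs, y_obs)

cand = np.linspace(-2, 2, 400).reshape(-1, 1)         # dense candidate grid
mu, sd = gp.predict(cand, return_std=True)
beta = 2.0                                            # confidence-width parameter (assumed)
lcb, ucb = mu - beta * sd, mu + beta * sd

roi = cand[lcb <= ucb.min()]                          # high-confidence region of interest
print(f"kept {len(roi)} of {len(cand)} candidates; ROI spans "
      f"[{roi.min():.2f}, {roi.max():.2f}]")
```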
arXiv Detail & Related papers (2023-07-25T09:45:47Z) - Bidirectional Looking with A Novel Double Exponential Moving Average to
Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (A Double exponential Moving averagE Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM.
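The entry above only names the framework; as a rough, assumption-laden sketch of the "double exponential moving average" ingredient (an EMA of an EMA of gradients), not the Admeta/AdmetaR/AdmetaS update rules themselves:

```python
# Rough numpy sketch of a double exponential moving average of gradients driving a
# plain gradient step; NOT the Admeta rule, just an illustration of the double-EMA idea.
import numpy as np

def grad(w):                       # gradient of a toy quadratic loss 0.5 * ||w||^2
    return w

w = np.array([3.0, -2.0])
m1 = np.zeros_like(w)              # first EMA (ordinary momentum buffer)
m2 = np.zeros_like(w)              # second EMA, smoothing the first one
lr, beta1, beta2 = 0.1, 0.9, 0.9   # assumed constants

for _ in range(200):
    g = grad(w)
    m1 = beta1 * m1 + (1 - beta1) * g
    m2 = beta2 * m2 + (1 - beta2) * m1
    w = w - lr * m2                # step along the doubly smoothed gradient

print("final weights:", w)
```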
arXiv Detail & Related papers (2023-07-02T18:16:06Z) - PAO: A general particle swarm algorithm with exact dynamics and
closed-form transition densities [0.0]
Particle swarm optimisation (PSO) approaches have proven to be highly effective in a number of application areas.
In this work, a highly-general, interpretable variant of the PSO algorithm -- particle attractor algorithm (PAO) -- is proposed.
arXiv Detail & Related papers (2023-04-28T16:19:27Z) - Hybrid Evolutionary Optimization Approach for Oilfield Well Control
Optimization [0.0]
Oilfield production optimization is challenging due to subsurface model complexity and associated non-linearity.
This paper presents efficacy of two hybrid evolutionary optimization approaches for well control optimization of a waterflooding operation.
arXiv Detail & Related papers (2021-03-29T13:36:51Z) - Combining Particle Swarm Optimizer with SQP Local Search for Constrained
Optimization Problems [0.0]
It is shown that the likely difference between leading algorithms lies in their local search ability.
A comparison with other leading algorithms on the tested benchmark suite indicates that the hybrid GP-PSO with local search competes alongside other leading PSO algorithms.
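As a hedged sketch of the general pattern of polishing a globally found solution with SQP local search (not the paper's GP-PSO implementation), one could hand the best point from a global phase to SciPy's SLSQP; the objective, constraint, and constants below are assumptions.

```python
# Sketch of hybridizing a global swarm-style phase with SQP local search, not the
# paper's GP-PSO: a crude global phase proposes a starting point, then SciPy's
# SLSQP (an SQP method) polishes it under a simple constraint. All specifics assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

constraint = {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]}   # x0 + x1 <= 1

# Crude global phase: random sampling as a stand-in for the PSO swarm.
samples = rng.uniform(-3, 3, (200, 2))
best = min(samples, key=objective)

# Local phase: SQP refinement of the best solution found so far.
res = minimize(objective, best, method="SLSQP",
               constraints=[constraint], bounds=[(-3, 3), (-3, 3)])
print("global best:", best, objective(best))
print("after SLSQP:", res.x, res.fun)
```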
arXiv Detail & Related papers (2021-01-25T09:34:52Z) - Stochastic batch size for adaptive regularization in deep network
optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in a deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.