PAO: A general particle swarm algorithm with exact dynamics and closed-form transition densities
- URL: http://arxiv.org/abs/2304.14956v1
- Date: Fri, 28 Apr 2023 16:19:27 GMT
- Title: PAO: A general particle swarm algorithm with exact dynamics and closed-form transition densities
- Authors: Max D. Champneys and Timothy J. Rogers
- Abstract summary: Particle swarm optimisation (PSO) approaches have proven to be highly effective in a number of application areas.
In this work, a highly general, interpretable variant of the PSO algorithm -- the particle attractor algorithm (PAO) -- is proposed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A great deal of research has been conducted on meta-heuristic
optimisation methods that are able to find global optima in settings in which
gradient-based optimisers have traditionally struggled. Of these, so-called
particle swarm optimisation (PSO) approaches have proven to be highly
effective in a number of application areas. Given the maturity of the PSO
field, it is likely that novel variants of the PSO algorithm stand to offer
only marginal gains in terms of performance -- there is, after all, no free
lunch. Instead of only chasing performance on suites of benchmark optimisation
functions, it is argued herein that research effort is better placed in the
pursuit of algorithms that also have other useful properties. In this work, a
highly general, interpretable variant of the PSO algorithm -- the particle
attractor algorithm (PAO) -- is proposed. Furthermore, the algorithm is
designed such that the transition densities (describing the motions of the
particles from one generation to the next) can be computed exactly in closed
form for each step. Access to closed-form transition densities has important
ramifications for the closely related field of Sequential Monte Carlo (SMC). In
order to demonstrate that these useful properties do not come at the cost of
performance, PAO is compared to several other state-of-the-art heuristic
optimisation algorithms in a benchmark comparison study.
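Closed-form transition densities arise when the particle dynamics can be simulated exactly rather than by approximate stepping. A minimal sketch of the idea, assuming simple Ornstein-Uhlenbeck dynamics toward an attractor that blends personal and global bests; the 50/50 blend, the parameter values, and all function names here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def ou_transition(x, attractor, theta, sigma, h, rng):
    """One exact Ornstein-Uhlenbeck step toward an attractor.

    The exact discretisation of dX = -theta (X - a) dt + sigma dW has a
    Gaussian transition with the mean and variance below, so the density
    of each move is available in closed form.
    """
    mean = attractor + (x - attractor) * np.exp(-theta * h)
    var = sigma**2 * (1.0 - np.exp(-2.0 * theta * h)) / (2.0 * theta)
    return rng.normal(mean, np.sqrt(var)), (mean, var)

def pao_like_swarm(f, bounds, n_particles=30, iters=200,
                   theta=1.5, sigma=0.5, h=1.0, seed=0):
    """Illustrative swarm with exact OU moves (1-D for simplicity)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)
    pbest = x.copy()
    pval = np.array([f(xi) for xi in x])
    for _ in range(iters):
        gbest = pbest[np.argmin(pval)]
        for i in range(n_particles):
            # Attractor: an illustrative 50/50 blend of personal/global bests
            a = 0.5 * pbest[i] + 0.5 * gbest
            x[i], _ = ou_transition(x[i], a, theta, sigma, h, rng)
        fx = np.array([f(xi) for xi in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[np.argmin(pval)], pval.min()

# Usage: minimise a 1-D quadratic
best_x, best_f = pao_like_swarm(lambda z: (z - 2.0) ** 2, (-10.0, 10.0))
```

Because every move is a draw from a known Gaussian, the density of each particle transition is available exactly -- the property that links PAO to Sequential Monte Carlo methods.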
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
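For context, one standard way to build an invariant kernel for BO is to average a base kernel over a finite symmetry group; a minimal sketch of that common construction, not necessarily the paper's exact kernel:

```python
import numpy as np

def rbf(x, y, ls=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * ls**2))

def invariant_kernel(x, y, group, base=rbf):
    """Symmetrise a base kernel over a finite group of transformations.

    Averaging over both arguments keeps the kernel positive semi-definite
    and makes it exactly invariant: k_G(gx, y) = k_G(x, y) for g in group.
    """
    return np.mean([base(g(x), h(y)) for g in group for h in group])

# Example group: sign-flip symmetry f(x) = f(-x)
group = [lambda x: x, lambda x: -x]
print(invariant_kernel(np.array([1.0]), np.array([-1.0]), group))
```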
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Beyond Single-Model Views for Deep Learning: Optimization versus Generalizability of Stochastic Optimization Algorithms [13.134564730161983]
This paper adopts a novel approach to deep learning optimization, focusing on stochastic gradient descent (SGD) and its variants.
We show that SGD and its variants perform on par with flat-minima optimizers such as SAM, albeit with half the gradient evaluations.
Our study uncovers several key findings regarding the relationship between training loss and hold-out accuracy, as well as the comparable performance of SGD and noise-enabled variants.
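The "half the gradient evaluations" comparison follows from SAM's two-stage step; a minimal sketch contrasting the two updates (function names are illustrative):

```python
import numpy as np

def sgd_step(w, grad_fn, lr=0.1):
    return w - lr * grad_fn(w)           # one gradient evaluation per step

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    g = grad_fn(w)                        # 1st evaluation: ascent direction
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return w - lr * grad_fn(w + eps)      # 2nd evaluation: perturbed gradient
```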
arXiv Detail & Related papers (2024-03-01T14:55:22Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
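A minimal sketch of ZO-signGD as commonly described -- a random-direction finite-difference gradient estimate followed by a sign step (parameter values are illustrative):

```python
import numpy as np

def zo_sign_gd(f, x, lr=0.01, mu=1e-3, q=10, iters=100, seed=0):
    """Zeroth-order sign gradient descent (illustrative sketch).

    Estimates the gradient from q random finite differences, then
    descends along the sign of the estimate -- only f-queries are used,
    no analytic gradients.
    """
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        g = np.zeros_like(x)
        for _ in range(q):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x)) / mu * u
        x = x - lr * np.sign(g / q)
    return x
```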
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial of the algorithm-dependent generalization analysis of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
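For reference, the AUPRC is typically estimated by average precision; a minimal sketch of that estimator (not the paper's surrogate objective):

```python
import numpy as np

def average_precision(scores, labels):
    """Average precision: a standard estimator of the area under the
    precision-recall curve. Assumes binary labels with at least one positive."""
    order = np.argsort(-scores)               # rank by descending score
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)                  # positives seen up to each rank
    precision = hits / (np.arange(len(labels)) + 1)
    return precision[labels == 1].mean()      # average over positive ranks
```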
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- Using Particle Swarm Optimization as Pathfinding Strategy in a Space with Obstacles [4.899469599577755]
Particle swarm optimization (PSO) is a population-based adaptive search and optimization algorithm.
In this paper, a pathfinding strategy is proposed to improve the efficiency of path planning for a broad range of applications.
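For reference, the canonical PSO update that such pathfinding strategies build on (coefficient values are conventional defaults, not the paper's):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One canonical PSO velocity/position update."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Inertia + attraction to personal best + attraction to global best
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```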
arXiv Detail & Related papers (2021-12-16T12:16:02Z)
- Directed particle swarm optimization with Gaussian-process-based function forecasting [15.733136147164032]
Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space towards the best known global and local solutions with randomized step lengths.
We show that our algorithm attains desirable properties for exploratory and exploitative behavior.
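A minimal sketch of the kind of Gaussian-process function forecast that can be used to direct particles -- an RBF-kernel GP posterior mean (the paper's exact forecasting and steering mechanism may differ):

```python
import numpy as np

def gp_posterior_mean(X, y, Xq, ls=1.0, noise=1e-6):
    """GP regression posterior mean with an RBF kernel.

    X: (n, d) evaluated points, y: (n,) observed values, Xq: (m, d) queries.
    The forecast at Xq could, e.g., supply an extra attractor for particles.
    """
    def k(A, B):
        d = A[:, None, :] - B[None, :, :]
        return np.exp(-np.sum(d ** 2, axis=-1) / (2.0 * ls ** 2))
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return k(Xq, X) @ alpha

# Usage: forecast between two observed points
print(gp_posterior_mean(np.array([[0.0], [1.0]]), np.array([0.0, 1.0]),
                        np.array([[0.5]])))
```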
arXiv Detail & Related papers (2021-02-08T13:02:57Z)
- Motion-Encoded Particle Swarm Optimization for Moving Target Search Using UAVs [4.061135251278187]
This paper presents a novel algorithm named the motion-encoded particle swarm optimization (MPSO) for finding a moving target with unmanned aerial vehicles (UAVs).
The proposed MPSO is developed to solve that problem by encoding the search trajectory as a series of UAV motion paths evolving over the generation of particles in a PSO algorithm.
Results from extensive simulations with existing methods show that the proposed MPSO improves the detection performance by 24% and time performance by 4.71 times compared to the original PSO.
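One plausible reading of "motion-encoded" is that each particle stores a sequence of motion segments that decode to a search path; a minimal, assumption-laden sketch:

```python
import numpy as np

def decode_path(motion_genome, start):
    """Decode a motion-encoded particle into a path (illustrative only).

    The genome is a flat array of (dx, dy) segments; the search trajectory
    is their cumulative sum from the start position, so PSO evolves motions
    rather than waypoints directly.
    """
    return start + np.cumsum(motion_genome.reshape(-1, 2), axis=0)

# Usage: a 3-segment genome decoded from the origin
print(decode_path(np.array([1.0, 0.0, 0.0, 1.0, 1.0, 1.0]),
                  np.zeros(2)))
```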
arXiv Detail & Related papers (2020-10-05T14:17:49Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a pruning strategy for the ansatze used in variational quantum algorithms, which we call "parameter-efficient circuit training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
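A minimal classical sketch of that sequential, block-wise idea (plain numerical optimization standing in for the variational quantum loop; all details are illustrative):

```python
import numpy as np

def numerical_grad(loss, theta, idx, eps=1e-5):
    """Central-difference gradient over one block of parameters."""
    g = np.zeros(len(idx))
    for k, i in enumerate(idx):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[k] = (loss(tp) - loss(tm)) / (2.0 * eps)
    return g

def blockwise_optimize(loss, theta, blocks, steps=50, lr=0.1):
    """Run a sequence of small optimizations, each over one parameter block,
    instead of optimizing all parameters at once."""
    for idx in blocks:
        for _ in range(steps):
            theta[idx] -= lr * numerical_grad(loss, theta, idx)
    return theta

# Usage: optimize 4 parameters in two blocks of 2
theta = np.random.default_rng(0).normal(size=4)
blockwise_optimize(lambda t: np.sum(t ** 2), theta, [[0, 1], [2, 3]])
```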
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
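For context, the standard discrete-time Lyapunov template behind such convergence arguments (the paper's specific construction for MAP-EM may differ):

```latex
% Find V such that
V(\theta) \ge 0, \qquad V(\theta^\ast) = 0, \qquad
V(\theta_{k+1}) - V(\theta_k) \le -\alpha\, V(\theta_k), \quad \alpha \in (0, 1],
% which yields V(\theta_k) \le (1-\alpha)^k\, V(\theta_0), i.e. linear convergence.
```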
arXiv Detail & Related papers (2020-06-23T01:34:18Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rate for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
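For reference, the Polyak-Lojasiewicz (PL) condition referenced above:

```latex
% A function f with minimum value f^* satisfies the PL condition if
\|\nabla f(x)\|^2 \;\ge\; 2\mu \left( f(x) - f^\ast \right)
\quad \text{for some } \mu > 0,
% under which gradient methods converge linearly even without convexity.
```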
arXiv Detail & Related papers (2020-02-13T05:42:27Z)