Numerical Comparison of Neighbourhood Topologies in Particle Swarm
Optimization
- URL: http://arxiv.org/abs/2101.10935v1
- Date: Mon, 25 Jan 2021 09:23:55 GMT
- Title: Numerical Comparison of Neighbourhood Topologies in Particle Swarm
Optimization
- Authors: Mauro S. Innocente, Johann Sienz
- Abstract summary: This paper offers a comparison of the performances exhibited by five different neighbourhood topologies combined with four different coefficients' settings.
Despite the optimum topology being problem-dependent, it appears that dynamic neighbourhoods with an increasing number of interconnections should be preferred.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Particle Swarm Optimization is a global optimizer in the sense that it has
the ability to escape poor local optima. However, if the spread of information
within the population is not adequately performed, premature convergence may
occur. The convergence speed, and hence the algorithm's resistance to
getting trapped in suboptimal solutions, are controlled by the settings of the
coefficients in the velocity update equation as well as by the neighbourhood
topology. The coefficients settings govern the trajectories of the particles
towards the good locations identified, whereas the neighbourhood topology
controls the form and speed of spread of information within the population
(i.e. the update of the social attractor). Numerous neighbourhood topologies
have been proposed and implemented in the literature. This paper offers a
numerical comparison of the performances exhibited by five different
neighbourhood topologies combined with four different coefficients' settings
when optimizing a set of benchmark unconstrained problems. Despite the optimum
topology being problem-dependent, it appears that dynamic neighbourhoods with
the number of interconnections increasing as the search progresses should be
preferred for a non-problem-specific optimizer.
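The velocity update and the role of the neighbourhood topology described above can be sketched in code. This is a minimal illustration contrasting a global (fully connected) neighbourhood with a static ring topology; the coefficient values, swarm size, and benchmark function are illustrative assumptions, not the settings studied in the paper.

```python
import numpy as np

def sphere(x):
    """Unimodal benchmark: f(x) = sum of squares, minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def pso(topology="global", n_particles=20, dim=5, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    # Commonly used constriction-style coefficients (an assumption here).
    w, c1, c2 = 0.7298, 1.4962, 1.4962
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([sphere(p) for p in x])
    for _ in range(iters):
        for i in range(n_particles):
            if topology == "global":
                # Social attractor: best personal best in the whole swarm.
                g = pbest[np.argmin(pbest_f)]
            else:
                # Ring: each particle only sees its two adjacent neighbours,
                # so information spreads more slowly through the swarm.
                idx = [(i - 1) % n_particles, i, (i + 1) % n_particles]
                g = pbest[min(idx, key=lambda j: pbest_f[j])]
            r1, r2 = rng.random(dim), rng.random(dim)
            # Velocity update: inertia + cognitive pull + social pull.
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] = x[i] + v[i]
            f = sphere(x[i])
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, x[i].copy()
    return float(pbest_f.min())

best_global = pso("global")
best_ring = pso("ring")
```

On a unimodal function the global topology typically converges fastest, while the ring slows the spread of information; as the abstract notes, this slower spread is what helps avoid premature convergence on multimodal problems.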
Related papers
- A Gradient Meta-Learning Joint Optimization for Beamforming and Antenna Position in Pinching-Antenna Systems [63.213207442368294]
We consider a novel optimization design for multi-waveguide pinching-antenna systems. The proposed GML-JO algorithm is robust to different choices and achieves better performance than existing optimization methods.
arXiv Detail & Related papers (2025-06-14T17:35:27Z) - Adam assisted Fully informed Particle Swarm Optimization ( Adam-FIPSO ) based Parameter Prediction for the Quantum Approximate Optimization Algorithm (QAOA) [1.024113475677323]
The Quantum Approximate Optimization Algorithm (QAOA) is a prominent variational algorithm used for solving optimization problems such as the Max-Cut problem. A key challenge in QAOA lies in efficiently identifying suitable parameters that lead to high-quality solutions.
arXiv Detail & Related papers (2025-06-07T13:14:41Z) - Distributed Optimization via Energy Conservation Laws in Dilated Coordinates [5.35599092568615]
This paper introduces an energy conservation approach for analyzing continuous-time dynamical systems in dilated coordinates.
Convergence rates can be explicitly expressed in terms of the inverse time-dilation factor.
Its accelerated convergence behavior is benchmarked against various state-of-the-art distributed optimization algorithms on practical, large-scale problems.
arXiv Detail & Related papers (2024-09-28T08:02:43Z) - Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z) - Accelerated Distributed Aggregative Optimization [5.5491171448159715]
We propose two novel algorithms named DAGT-HB and DAGT-NES for solving the distributed aggregative optimization problem.
We show that the DAGT-HB and DAGT-NES algorithms converge to an optimal solution at a global $\mathbf{R}$-linear convergence rate.
arXiv Detail & Related papers (2023-04-17T08:11:01Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithm that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Tree ensemble kernels for Bayesian optimization with known constraints
over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - Using Particle Swarm Optimization as Pathfinding Strategy in a Space
with Obstacles [4.899469599577755]
Particle swarm optimization (PSO) is a population-based adaptive search algorithm.
In this paper, a pathfinding strategy is proposed to improve the efficiency of path planning for a broad range of applications.
arXiv Detail & Related papers (2021-12-16T12:16:02Z) - Implicit differentiation for fast hyperparameter selection in non-smooth
convex learning [87.60600646105696]
We study first-order methods when the inner optimization problem is convex but non-smooth.
We show that the forward-mode differentiation of proximal gradient descent and proximal coordinate descent yield sequences of Jacobians converging toward the exact Jacobian.
arXiv Detail & Related papers (2021-05-04T17:31:28Z) - Combining Particle Swarm Optimizer with SQP Local Search for Constrained
Optimization Problems [0.0]
It is shown that the likely differences between leading algorithms lie in their local search ability.
A comparison with other leading algorithms on the tested benchmark suite indicates that the hybrid GP-PSO with the implemented local search competes alongside other leading PSO algorithms.
arXiv Detail & Related papers (2021-01-25T09:34:52Z) - Decentralized gradient methods: does topology matter? [23.1803761184887]
In this setting, each worker maintains a local estimate of the optimal parameter vector and iteratively updates it by averaging the estimates obtained from its neighbors and applying a correction based on its local dataset.
While theoretical results suggest that the worker communication topology should have a strong impact on the number of epochs needed to converge, previous experiments have suggested the opposite.
This paper sheds light on this apparent contradiction and shows how sparse topologies can lead to faster convergence even in the absence of communication delays.
arXiv Detail & Related papers (2020-02-28T12:59:25Z)
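The averaging-plus-correction scheme described in the last entry can be sketched as follows. This is a hedged illustration of decentralized gradient descent over a ring topology; the mixing matrix, step size, and quadratic local losses are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def decentralized_gd(n_workers=5, dim=3, iters=300, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Local quadratic losses f_i(x) = 0.5 * ||x - b_i||^2; the minimizer of
    # their sum is the mean of the b_i vectors.
    b = rng.normal(size=(n_workers, dim))
    x = np.zeros((n_workers, dim))
    # Doubly stochastic mixing matrix for a ring: each worker averages
    # itself and its two neighbours with equal weight.
    W = np.zeros((n_workers, n_workers))
    for i in range(n_workers):
        for j in (i - 1, i, i + 1):
            W[i, j % n_workers] = 1.0 / 3.0
    for _ in range(iters):
        x = W @ x             # average neighbours' estimates (consensus step)
        x -= lr * (x - b)     # local correction: grad f_i(x_i) = x_i - b_i
    return x, b.mean(axis=0)

x, opt = decentralized_gd()
```

With a constant step size, the individual estimates settle only in a neighbourhood of the optimum, while their average matches it; the choice of mixing matrix (i.e. the topology) governs how quickly the consensus step homogenizes the estimates, which is the effect the paper examines.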