Orthogonally Initiated Particle Swarm Optimization with Advanced Mutation for Real-Parameter Optimization
- URL: http://arxiv.org/abs/2405.12542v1
- Date: Tue, 21 May 2024 07:16:20 GMT
- Title: Orthogonally Initiated Particle Swarm Optimization with Advanced Mutation for Real-Parameter Optimization
- Authors: Indu Bala, Dikshit Chauhan, Lewis Mitchell
- Abstract summary: This article introduces an enhanced particle swarm optimizer (PSO), termed Orthogonal PSO with Mutation (OPSO-m).
It proposes an orthogonal array-based learning approach to cultivate an improved initial swarm for PSO, significantly boosting the adaptability of swarm-based optimization algorithms.
The article further presents archive-based self-adaptive learning strategies, dividing the population into regular and elite subgroups.
- Score: 0.04096453902709291
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article introduces an enhanced particle swarm optimizer (PSO), termed Orthogonal PSO with Mutation (OPSO-m). Initially, it proposes an orthogonal array-based learning approach to cultivate an improved initial swarm for PSO, significantly boosting the adaptability of swarm-based optimization algorithms. The article further presents archive-based self-adaptive learning strategies, dividing the population into regular and elite subgroups. Each subgroup employs distinct learning mechanisms. The regular group utilizes efficient learning schemes derived from three unique archives, which categorize individuals based on their quality levels. Additionally, a mutation strategy is implemented to update the positions of elite individuals. Comparative studies are conducted to assess the effectiveness of these learning strategies in OPSO-m, evaluating its optimization capacity through exploration-exploitation dynamics and population diversity analysis. The proposed OPSO-m model is tested on real-parameter challenges from the CEC 2017 suite in 10, 30, 50, and 100-dimensional search spaces, with its results compared to contemporary state-of-the-art algorithms using a sensitivity metric. OPSO-m exhibits distinguished performance in the precision of solutions, rapidity of convergence, efficiency in search, and robust stability, thus highlighting its superior aptitude for resolving intricate optimization issues.
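The abstract does not spell out the orthogonal array construction. As a hedged illustration, the classic OA(q^2, q+1, q, 2) construction for prime q, used by earlier orthogonal-learning PSO variants, can seed a structured initial swarm; the function names and the bin-center level mapping below are illustrative assumptions, not the paper's code.
```python
import numpy as np

def orthogonal_array(q: int) -> np.ndarray:
    """Build an OA(q^2, q+1, q, 2) for prime q.

    Row (a, b) gets columns [a, b, a+1*b, a+2*b, ..., a+(q-1)*b] mod q;
    any two columns then contain every ordered level pair exactly once.
    """
    rows = [[a, b] + [(a + j * b) % q for j in range(1, q)]
            for a in range(q) for b in range(q)]
    return np.array(rows)

def orthogonal_init(dim: int, lb: float, ub: float, q: int = 5) -> np.ndarray:
    """Spread q^2 candidate particles over an orthogonal grid.

    Requires dim <= q + 1 (one OA column per decision variable); level
    indices are mapped to the centers of q equal bins in [lb, ub].
    """
    oa = orthogonal_array(q)[:, :dim]
    return lb + (oa + 0.5) * (ub - lb) / q

swarm = orthogonal_init(dim=6, lb=-100.0, ub=100.0, q=5)  # 25 particles in 6-D
```
From these q^2 structured candidates, the best N by fitness would seed the swarm; for problems with more than q+1 dimensions, OA-based initializers typically partition the variables into groups.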
Related papers
- Optimizing Variational Quantum Circuits Using Metaheuristic Strategies in Reinforcement Learning [2.7504809152812695]
This work explores the integration of metaheuristic algorithms (Particle Swarm Optimization, Ant Colony Optimization, Tabu Search, Genetic Algorithm, Simulated Annealing, and Harmony Search) into Quantum Reinforcement Learning.
Evaluations in $5\times5$ MiniGrid Reinforcement Learning environments show that all algorithms yield near-optimal results.
arXiv Detail & Related papers (2024-08-02T11:14:41Z)
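As a reference point for the swarm-based entries in this list, a textbook global-best PSO loop over a generic black-box objective looks roughly as follows; the sphere function stands in for whatever is being tuned (e.g., variational-circuit parameters in the paper above), and the coefficient values are common defaults rather than any paper's settings.
```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                      # stand-in black-box objective
    return float(np.sum(x ** 2))

n, dim, iters = 20, 8, 200
w, c1, c2 = 0.7, 1.5, 1.5           # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # velocity pulls each particle toward its own best and the swarm best
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([sphere(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(gbest, pbest_f.min())
```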
- Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of prompt optimizers based on large language models (LLMs).
We identify two pivotal factors in model parameter learning: update direction and update method.
In particular, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies.
arXiv Detail & Related papers (2024-02-27T15:05:32Z)
- A new simplified MOPSO based on Swarm Elitism and Swarm Memory: MO-ETPSO [0.0]
The Elitist PSO (MO-ETPSO) adapts PSO for multi-objective optimization problems.
The proposed algorithm integrates core strategies from the well-established NSGA-II approach.
A novel aspect of the algorithm is the introduction of a swarm memory and swarm elitism.
arXiv Detail & Related papers (2024-02-20T09:36:18Z)
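The abstract above does not detail the archive logic. A minimal sketch of the Pareto-dominance test and a non-dominated external archive, the usual substrate for swarm memory and elitism in MOPSO-style algorithms, might look like this (function names are illustrative, not the paper's API):
```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimization)."""
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def update_archive(archive, candidate):
    """Insert a (position, objectives) pair, keeping only non-dominated entries."""
    _, fc = candidate
    if any(dominates(fa, fc) for _, fa in archive):
        return archive                      # candidate is dominated; discard it
    kept = [(x, fa) for x, fa in archive if not dominates(fc, fa)]
    kept.append(candidate)
    return kept
```
A full MO-ETPSO would additionally prune the archive (e.g., by NSGA-II crowding distance) when it exceeds a size limit.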
- Enhancing Optimization Through Innovation: The Multi-Strategy Improved Black Widow Optimization Algorithm (MSBWOA) [11.450701963760817]
This paper introduces a Multi-Strategy Improved Black Widow Optimization Algorithm (MSBWOA).
It is designed to enhance the performance of the standard Black Widow Algorithm (BW) in solving complex optimization problems.
It integrates four key strategies, among them: initializing the population using Tent chaotic mapping to enhance diversity and initial exploratory capability; mutation of the least-fit individuals to keep the population dynamic and prevent premature convergence; and a random perturbation strategy to strengthen the algorithm's ability to escape local optima.
arXiv Detail & Related papers (2023-12-20T19:55:36Z)
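Of the strategies listed, the Tent-map initializer is the easiest to make concrete. A hedged sketch follows; the slope 1.99 (rather than the textbook 2) is a deliberate numerical tweak to avoid the floating-point collapse of exact doubling on long sequences, and may differ from the paper's parameterization.
```python
import numpy as np

def tent_map_population(n, dim, lb, ub, x0=0.37, mu=1.99):
    """Fill [lb, ub]^dim with n points from a tent-map chaotic sequence.

    The map x <- mu*x (x < 0.5), mu*(1 - x) otherwise fills (0, 1) more
    evenly than independent uniform draws for short sequences.
    """
    seq = np.empty(n * dim)
    x = x0
    for i in range(n * dim):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        seq[i] = x
    return lb + seq.reshape(n, dim) * (ub - lb)

pop = tent_map_population(n=30, dim=10, lb=-100.0, ub=100.0)
```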
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics based on the population loss that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Chaos inspired Particle Swarm Optimization with Levy Flight for Genome Sequence Assembly [0.0]
In this paper, we propose a new variant of PSO to address the permutation-optimization problem.
PSO is integrated with chaotic maps and Lévy flight (a random-walk algorithm) to effectively balance the exploration and exploitation capabilities of the algorithm.
Empirical experiments are conducted to evaluate the performance of the proposed method in comparison to the other variants of PSO proposed in the literature.
arXiv Detail & Related papers (2021-10-20T15:24:27Z)
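A common way to generate the Lévy-flight steps such hybrids rely on is Mantegna's algorithm; the sketch below is that standard recipe, not necessarily the paper's exact scheme.
```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Draw one Levy-distributed step via Mantegna's algorithm.

    step = u / |v|^(1/beta) with u ~ N(0, sigma_u^2), v ~ N(0, 1),
    giving heavy-tailed jumps that occasionally relocate a particle far
    from its current position (exploration) between many small moves.
    """
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)
```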
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian Modeling [68.69431580852535]
We introduce a novel Gaussian process (GP) regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z)
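The claim is that modeling each feedback component separately yields a lower-variance posterior for the total than one regression on the aggregate. A toy scikit-learn sketch of that decomposed setup (synthetic data; not the paper's model) is:
```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (40, 2))
# Toy decomposed feedback: the observed total is the sum of 3 subgroup signals.
parts = [np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=40),
         np.cos(2 * np.pi * X[:, 1]) + 0.05 * rng.normal(size=40),
         X.sum(axis=1) + 0.05 * rng.normal(size=40)]

kernel = RBF(length_scale=0.3) + WhiteKernel(noise_level=0.01)
gps = [GaussianProcessRegressor(kernel=kernel).fit(X, y) for y in parts]

Xq = rng.uniform(0, 1, (5, 2))
# Posterior of the total = sum of the independent per-part posteriors.
mean = sum(gp.predict(Xq) for gp in gps)
var = sum(gp.predict(Xq, return_std=True)[1] ** 2 for gp in gps)
```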
- Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer named stocBiO, built on a sample-efficient hypergradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansätze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
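EOS builds on Differential Evolution; the baseline DE/rand/1/bin generation step that such self-adaptive, multi-population variants extend can be sketched as follows (the F and CR values are common defaults, not EOS's adapted parameters).
```python
import numpy as np

def de_generation(pop, fitness, fobj, F=0.5, CR=0.9, rng=np.random.default_rng()):
    """One DE/rand/1/bin generation (minimization)."""
    n, dim = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])          # rand/1 mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True                     # force >= 1 gene from mutant
        trial = np.where(cross, mutant, pop[i])             # binomial crossover
        f = fobj(trial)
        if f <= fitness[i]:                                  # greedy selection
            new_pop[i], new_fit[i] = trial, f
    return new_pop, new_fit
```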
This list is automatically generated from the titles and abstracts of the papers on this site.