Gradient Based Hybridization of PSO
- URL: http://arxiv.org/abs/2312.09703v1
- Date: Fri, 15 Dec 2023 11:26:36 GMT
- Title: Gradient Based Hybridization of PSO
- Authors: Arun K Pujari, Sowmini Devi Veeramachaneni
- Abstract summary: Particle Swarm Optimization (PSO) has emerged as a powerful metaheuristic global optimization approach over the past three decades.
PSO faces challenges, such as premature stagnation in single-objective scenarios and the need to strike a balance between exploration and exploitation.
Hybridizing PSO by integrating its cooperative nature with established optimization techniques from diverse paradigms offers a promising solution.
- Score: 1.1059341532498634
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Particle Swarm Optimization (PSO) has emerged as a powerful metaheuristic
global optimization approach over the past three decades. Its appeal lies in
its ability to tackle complex multidimensional problems that defy conventional
algorithms. However, PSO faces challenges, such as premature stagnation in
single-objective scenarios and the need to strike a balance between exploration
and exploitation. Hybridizing PSO by integrating its cooperative nature with
established optimization techniques from diverse paradigms offers a promising
solution. In this paper, we investigate various strategies for synergizing
gradient-based optimizers with PSO. We introduce different hybridization
principles and explore several approaches, including sequential decoupled
hybridization, coupled hybridization, and adaptive hybridization. These
strategies aim to enhance the efficiency and effectiveness of PSO, ultimately
improving its ability to navigate intricate optimization landscapes. By
combining the strengths of gradient-based methods with the inherent social
dynamics of PSO, we seek to address the critical objectives of intelligent
exploration and exploitation in complex optimization tasks. Our study delves
into the comparative merits of these hybridization techniques and offers
insights into their application across different problem domains.
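As a rough illustration of the sequential decoupled and coupled hybridization patterns named above, the sketch below combines a basic PSO loop with finite-difference gradient steps on a toy Rastrigin objective. The objective, hyperparameters, and use of numerical gradients are illustrative assumptions for this sketch, not the implementation studied in the paper; adaptive hybridization (e.g., switching gradient steps on or off based on swarm stagnation) is omitted for brevity.

```python
# Minimal sketch of gradient-based PSO hybridization (illustrative assumptions,
# not the authors' implementation).
import numpy as np

def objective(x):
    # Toy multimodal test function (Rastrigin), standing in for a real benchmark.
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def numerical_grad(f, x, eps=1e-6):
    # Central-difference gradient; a real hybrid could use analytic gradients.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def pso(f, dim=5, swarm=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        coupled_gradient=False, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (swarm, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        # Standard velocity/position update driven by personal and global bests.
        r1, r2 = rng.random((2, swarm, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        if coupled_gradient:
            # Coupled hybridization: nudge the global best downhill each iteration.
            gbest = gbest - lr * numerical_grad(f, gbest)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if pbest_val.min() < f(gbest):
            gbest = pbest[pbest_val.argmin()].copy()
    return gbest, f(gbest)

# Sequential decoupled hybridization: run PSO first, then refine its answer
# with plain gradient descent.
best, val = pso(objective)
for _ in range(100):
    best = best - 0.01 * numerical_grad(objective, best)
print("decoupled refined value:", objective(best))

# Coupled hybridization: gradient steps interleaved inside the PSO loop.
print("coupled value:", pso(objective, coupled_gradient=True)[1])
```

In the coupled variant the gradient step acts only on the global best, so the swarm's social dynamics still drive exploration while the local gradient information sharpens exploitation, in line with the balance the abstract describes.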
Related papers
- Hybrid Reinforcement Learning Framework for Mixed-Variable Problems [0.7146036252503987]
We introduce a hybrid Reinforcement Learning (RL) framework that synergizes RL for discrete variable selection with Bayesian Optimization for continuous variable adjustment.
Our method consistently outperforms traditional RL, random search, and standalone Bayesian optimization in terms of effectiveness and efficiency.
arXiv Detail & Related papers (2024-05-30T21:42:33Z)
- Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
arXiv Detail & Related papers (2023-10-10T00:21:10Z)
- Enhancing Optimization Performance: A Novel Hybridization of Gaussian Crunching Search and Powell's Method for Derivative-Free Optimization [0.0]
We present a novel approach to enhance optimization performance through the hybridization of Gaussian Crunching Search (GCS) and Powell's Method for derivative-free optimization.
This hybrid approach opens up new possibilities for optimizing complex systems and finding optimal solutions in a range of applications.
arXiv Detail & Related papers (2023-08-09T01:27:04Z)
- Applying Autonomous Hybrid Agent-based Computing to Difficult Optimization Problems [56.821213236215634]
This paper focuses on a proposed hybrid version of the Evolutionary Multi-Agent System (EMAS).
It covers the selection and introduction of a number of hybrid operators and the definition of rules for starting the hybrid steps of the main algorithm.
Those hybrid steps leverage existing, well-known, and proven-to-be-efficient metaheuristics and integrate their results into the main algorithm.
arXiv Detail & Related papers (2022-10-24T13:28:35Z)
- Cooperative guidance of multiple missiles: a hybrid co-evolutionary approach [0.9176056742068814]
Cooperative guidance of multiple missiles is a challenging task under rigorous constraints on time and space consensus.
This paper develops a novel natural co-evolutionary strategy (NCES) to address the issues of non-stationarity and continuous control faced by cooperative guidance.
A hybrid co-evolutionary cooperative guidance law (HCCGL) is proposed by integrating the highly scalable co-evolutionary mechanism and the traditional guidance strategy.
arXiv Detail & Related papers (2022-08-15T12:59:38Z)
- Accelerated Federated Learning with Decoupled Adaptive Optimization [53.230515878096426]
The federated learning (FL) framework enables clients to collaboratively learn a shared model while keeping the training data private on each client.
Recently, many efforts have been made to generalize centralized adaptive optimization methods, such as SGDM, Adam, and AdaGrad, to federated settings.
This work aims to develop novel adaptive optimization methods for FL from the perspective of the dynamics of ordinary differential equations (ODEs).
arXiv Detail & Related papers (2022-07-14T22:46:43Z)
- VNE Strategy based on Chaotic Hybrid Flower Pollination Algorithm Considering Multi-criteria Decision Making [12.361459296815559]
A design strategy for a hybrid flower pollination algorithm applied to the Virtual Network Embedding (VNE) problem is discussed.
A cross operation is used in place of the cross-pollination operation to perform the global search.
A life-cycle mechanism is introduced as a complement to the traditional fitness-based selection strategy.
arXiv Detail & Related papers (2022-02-07T00:57:00Z)
- Multi-Objective Constrained Optimization for Energy Applications via Tree Ensembles [55.23285485923913]
Energy systems optimization problems are complex due to strongly non-linear system behavior and multiple competing objectives.
In some cases, proposed optimal solutions need to obey explicit input constraints related to physical properties or safety-critical operating conditions.
This paper proposes a novel data-driven strategy using tree ensembles for constrained multi-objective optimization of black-box problems.
arXiv Detail & Related papers (2021-11-04T20:18:55Z)
- Hybrid Henry Gas Solubility Optimization Algorithm with Dynamic Cluster-to-Algorithm Mapping for Search-based Software Engineering Problems [1.0323063834827413]
This paper discusses a new variant of the Henry Gas Solubility Optimization (HGSO) Algorithm, called Hybrid HGSO (HHGSO).
Unlike its predecessor, HHGSO allows multiple clusters serving different individual meta-heuristic algorithms to coexist within the same population.
Exploiting a dynamic cluster-to-algorithm mapping via a penalty-and-reward model with an adaptive switching factor, HHGSO offers a novel approach to meta-heuristic hybridization.
arXiv Detail & Related papers (2021-05-31T12:42:15Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA [68.11912614360878]
Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for taking advantage of near-term quantum computers in practical applications.
Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem.
In this study, we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence results in improved performance.
arXiv Detail & Related papers (2020-03-11T13:52:41Z)