Accelerating Evolution: Integrating PSO Principles into Real-Coded Genetic Algorithm Crossover
- URL: http://arxiv.org/abs/2505.03217v1
- Date: Tue, 06 May 2025 06:17:57 GMT
- Title: Accelerating Evolution: Integrating PSO Principles into Real-Coded Genetic Algorithm Crossover
- Authors: Xiaobo Jin, JiaShu Tu
- Abstract summary: This study introduces an innovative crossover operator named Particle Swarm Optimization-inspired Crossover (PSOX). PSOX uniquely incorporates guidance from both the current global best solution and historical optimal solutions across multiple generations. This novel mechanism enables the algorithm to maintain population diversity while simultaneously accelerating convergence toward promising regions of the search space.
- Score: 2.854482269849925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study introduces an innovative crossover operator named Particle Swarm Optimization-inspired Crossover (PSOX), which is specifically developed for real-coded genetic algorithms. Departing from conventional crossover approaches that only exchange information between individuals within the same generation, PSOX uniquely incorporates guidance from both the current global best solution and historical optimal solutions across multiple generations. This novel mechanism enables the algorithm to maintain population diversity while simultaneously accelerating convergence toward promising regions of the search space. The effectiveness of PSOX is rigorously evaluated through comprehensive experiments on 15 benchmark test functions with diverse characteristics, including unimodal, multimodal, and highly complex landscapes. Comparative analysis against five state-of-the-art crossover operators reveals that PSOX consistently delivers superior performance in terms of solution accuracy, algorithmic stability, and convergence speed, especially when combined with an appropriate mutation strategy. Furthermore, the study provides an in-depth investigation of how different mutation rates influence PSOX's performance, yielding practical guidelines for parameter tuning when addressing optimization problems with varying landscape properties.
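The abstract describes the PSOX mechanism only at a high level and does not give its explicit update rule. Purely as an illustration of the stated idea, that offspring are pulled toward the current global best and toward historical optimal solutions from earlier generations rather than being formed only by mixing same-generation parents, a minimal Python sketch might look as follows; the function name, coefficients, and blending scheme are assumptions, not the authors' formula.

```python
import numpy as np

def psox_crossover(parent, global_best, historical_bests, rng=None):
    """Hypothetical sketch of a PSO-inspired crossover (PSOX-like).

    Illustrative only: the offspring is attracted toward the current
    global best and toward one historical optimal solution, instead of
    exchanging genes with a same-generation mate. The random weights
    and the choice of a single historical guide are assumptions.
    """
    rng = rng or np.random.default_rng()
    parent = np.asarray(parent, dtype=float)
    global_best = np.asarray(global_best, dtype=float)
    # Pick one historical best from earlier generations as a second guide (assumption).
    hist = np.asarray(historical_bests[rng.integers(len(historical_bests))], dtype=float)
    # PSO-style random attraction coefficients, drawn per dimension (assumption).
    r1 = rng.random(parent.shape)
    r2 = rng.random(parent.shape)
    # Pull the parent toward both guides to form the offspring.
    return parent + r1 * (global_best - parent) + r2 * (hist - parent)

# Toy usage on the sphere function with a random real-coded population.
pop = np.random.uniform(-5.0, 5.0, size=(50, 10))
fitness = (pop ** 2).sum(axis=1)
g_best = pop[fitness.argmin()].copy()
child = psox_crossover(pop[0], g_best, historical_bests=[g_best])
```

In a full real-coded GA, such an operator would replace the usual two-parent crossover inside the generational loop and be paired with a mutation operator, since the paper reports that PSOX's behavior depends noticeably on the mutation strategy and rate.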
Related papers
- Divergence Minimization Preference Optimization for Diffusion Model Alignment [58.651951388346525]
Divergence Minimization Preference Optimization (DMPO) is a principled method for aligning diffusion models by minimizing reverse KL divergence. Our results show that diffusion models fine-tuned with DMPO can consistently outperform or match existing techniques. DMPO unlocks a robust and elegant pathway for preference alignment, bridging principled theory with practical performance in diffusion models.
arXiv Detail & Related papers (2025-07-10T07:57:30Z) - Preference Optimization for Combinatorial Optimization Problems [54.87466279363487]
Reinforcement Learning (RL) has emerged as a powerful tool for neural optimization, enabling models to learn to solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast action spaces. We propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling.
arXiv Detail & Related papers (2025-05-13T16:47:00Z) - Evolutionary Policy Optimization [47.30139909878251]
On-policy reinforcement learning (RL) algorithms are widely used for their strong performance and training stability, but they struggle to scale with larger batch sizes. We propose Evolutionary Policy Optimization (EPO), a hybrid that combines the scalability and diversity of evolutionary algorithms (EAs) with the performance and stability of policy gradients.
arXiv Detail & Related papers (2025-03-24T18:08:54Z) - Constrained Hybrid Metaheuristic Algorithm for Probabilistic Neural Networks Learning [0.3686808512438362]
This study investigates the potential of hybrid metaheuristic algorithms to enhance the training of Probabilistic Neural Networks (PNNs). Traditional learning methods, such as gradient-based approaches, often struggle in high-dimensional and uncertain environments. We propose the constrained Hybrid Metaheuristic (cHM) algorithm, which combines multiple population-based optimisation techniques into a unified framework.
arXiv Detail & Related papers (2025-01-26T19:49:16Z) - Heterogeneous Multi-Agent Reinforcement Learning for Distributed Channel Access in WLANs [47.600901884970845]
This paper investigates the use of multi-agent reinforcement learning (MARL) to address distributed channel access in wireless local area networks. In particular, we consider the challenging yet more practical case where the agents heterogeneously adopt value-based or policy-based reinforcement learning algorithms to train the model. We propose a heterogeneous MARL training framework, named QPMIX, which adopts a centralized training with distributed execution paradigm to enable heterogeneous agents to collaborate.
arXiv Detail & Related papers (2024-12-18T13:50:31Z) - MoME: Mixture of Multimodal Experts for Cancer Survival Prediction [46.520971457396726]
Survival analysis, as a challenging task, requires integrating Whole Slide Images (WSIs) and genomic data for comprehensive decision-making.
Previous approaches utilize co-attention methods, which fuse features from both modalities only once after separate encoding.
We propose a Biased Progressive Encoding (BPE) paradigm, performing encoding and fusion simultaneously.
arXiv Detail & Related papers (2024-06-14T03:44:33Z) - Embedded feature selection in LSTM networks with multi-objective
evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z) - Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient
Methods [73.35353358543507]
Stochastic Gradient Descent-Ascent (SGDA) is one of the most prominent algorithms for solving min-max optimization and variational inequality problems (VIPs).
In this paper, we propose a unified convergence analysis that covers a large variety of descent-ascent methods.
We develop several new variants of SGDA such as a new variance-reduced method (L-SVRGDA), new distributed methods with compression (QSGDA, DIANA-SGDA, VR-DIANA-SGDA), and a new method with coordinate randomization (SEGA-SGDA).
arXiv Detail & Related papers (2022-02-15T09:17:39Z) - VNE Strategy based on Chaotic Hybrid Flower Pollination Algorithm
Considering Multi-criteria Decision Making [12.361459296815559]
A design strategy for a hybrid flower pollination algorithm applied to the Virtual Network Embedding (VNE) problem is discussed.
A crossover operation is used in place of the cross-pollination operation to carry out the global search.
A life cycle mechanism is introduced as a complement to the traditional fitness-based selection strategy.
arXiv Detail & Related papers (2022-02-07T00:57:00Z) - Chaos inspired Particle Swarm Optimization with Levy Flight for Genome
Sequence Assembly [0.0]
In this paper, we propose a new variant of PSO to address the permutation-optimization problem.
PSO is integrated with Chaos and Levy Flight (a random walk algorithm) to effectively balance the exploration and exploitation capabilities of the algorithm.
Empirical experiments are conducted to evaluate the performance of the proposed method in comparison to the other variants of PSO proposed in the literature.
arXiv Detail & Related papers (2021-10-20T15:24:27Z) - Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z) - cMLSGA: A Co-Evolutionary Multi-Level Selection Genetic Algorithm for
Multi-Objective Optimization [0.0]
Multi-Level Selection Genetic Algorithm (MLSGA) already shows good performance on a range of problems.
This paper proposes a distinct set of co-evolutionary mechanisms, which defines co-evolution as competition between collectives rather than individuals.
arXiv Detail & Related papers (2021-04-22T13:52:21Z) - AT-MFCGA: An Adaptive Transfer-guided Multifactorial Cellular Genetic
Algorithm for Evolutionary Multitasking [17.120962133525225]
We introduce a novel adaptive metaheuristic algorithm to deal with Evolutionary Multitasking environments.
AT-MFCGA relies on cellular automata to implement mechanisms in order to exchange knowledge among the optimization problems under consideration.
arXiv Detail & Related papers (2020-10-08T12:00:10Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm
for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.