A Perturbation and Speciation-Based Algorithm for Dynamic Optimization Uninformed of Change
- URL: http://arxiv.org/abs/2505.11634v1
- Date: Fri, 16 May 2025 18:53:37 GMT
- Title: A Perturbation and Speciation-Based Algorithm for Dynamic Optimization Uninformed of Change
- Authors: Federico Signorelli, Anil Yaman
- Abstract summary: Perturbation and Speciation-Based Particle Swarm Optimization (PSPSO) is a robust algorithm for uninformed dynamic optimization. PSPSO combines speciation-based niching, deactivation, and a newly proposed random perturbation mechanism to handle DOPs. PSPSO shows strength in functions with high dimensionality or high frequency of change in the Generalized Moving Peaks Benchmark.
- Score: 1.4425878137951238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic optimization problems (DOPs) are challenging due to their changing conditions. This requires algorithms to be highly adaptable and efficient at rapidly finding new optimal solutions under changing conditions. Traditional approaches often depend on explicit change detection, which can be impractical or inefficient when change detection is unreliable or infeasible. We propose Perturbation and Speciation-Based Particle Swarm Optimization (PSPSO), a robust algorithm for uninformed dynamic optimization that does not require information about environmental changes. PSPSO combines speciation-based niching, deactivation, and a newly proposed random perturbation mechanism to handle DOPs. PSPSO leverages a cyclical multi-population framework, strategic resource allocation, and targeted noisy updates to adapt to dynamic environments. We compare PSPSO with several state-of-the-art algorithms on the Generalized Moving Peaks Benchmark (GMPB), which covers a variety of scenarios, including simple and multi-modal dynamic optimization, frequent and intense changes, and high-dimensional spaces. Our results show that PSPSO outperforms other state-of-the-art uninformed algorithms in all scenarios and leads to competitive results compared to informed algorithms. In particular, PSPSO shows strength in functions with high dimensionality or high frequency of change in the GMPB. The ablation study showed the importance of the random perturbation component.
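Two of the components named in the abstract, speciation-based niching and the unconditional random perturbation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: function names, parameters, and the greedy best-first speciation rule are assumptions.

```python
import numpy as np

def speciate(positions, fitness, radius):
    """Greedy speciation-based niching (assumed variant): visit particles
    best-first and attach each to the first species whose seed lies within
    `radius`; otherwise the particle founds a new species."""
    order = np.argsort(-fitness)  # best first, assuming maximization
    seeds, species = [], []
    for i in order:
        for s, members in zip(seeds, species):
            if np.linalg.norm(positions[i] - positions[s]) <= radius:
                members.append(i)
                break
        else:
            seeds.append(i)
            species.append([i])
    return [np.array(m) for m in species]

def random_perturbation(positions, sigma, rate, rng):
    """Uninformed perturbation: since no change detector is available,
    a random fraction `rate` of particles is jolted with Gaussian noise
    every cycle, unconditionally."""
    mask = rng.random(len(positions)) < rate
    return positions + mask[:, None] * rng.normal(0.0, sigma, positions.shape)
```

The key design point reflected here is that the perturbation fires on a fixed schedule rather than being triggered by detected change, which is what makes the method applicable when change detection is unreliable.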
Related papers
- RAG/LLM Augmented Switching Driven Polymorphic Metaheuristic Framework [5.10888539576355]
Polymorphic Metaheuristic Framework (PMF) is a self-adaptive metaheuristic switching mechanism driven by real-time performance feedback and dynamic algorithmic selection. By integrating AI-driven decision-making and self-correcting mechanisms, PMF paves the way for scalable, intelligent, and autonomous optimization frameworks.
arXiv Detail & Related papers (2025-05-20T01:41:22Z) - Quantum Inspired Chaotic Salp Swarm Optimization for Dynamic Optimization [4.44483539967295]
We study a variant of SSA known as QCSSO, which integrates the principles of quantum computing.
A chaotic operator is employed with quantum computing to respond to change and to increase individual searchability.
The introduced QCSSO proves to be a competitive algorithm for DOPs.
arXiv Detail & Related papers (2024-01-21T02:59:37Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
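The EnKF analysis step at the core of this approach is standard and can be sketched independently of the GPSSM setting. The stochastic (perturbed-observation) form shown here is an assumption; the paper's exact variant may differ.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic EnKF analysis step for a linear observation y = H x + noise.

    ensemble: (N, d) array of state samples; H: (m, d) observation operator;
    y: (m,) observation; R: (m, m) observation-noise covariance.
    Each member is nudged toward its own noise-perturbed copy of y."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (N - 1)                      # sample state covariance
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.solve(S, np.eye(len(y)))  # Kalman gain
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```

Because the update uses only the sample covariance of the ensemble, no explicit parameterization of the posterior is needed, which is the property the summary highlights.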
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - On the Impact of Operators and Populations within Evolutionary Algorithms for the Dynamic Weighted Traveling Salesperson Problem [13.026567958569965]
We investigate the node weighted traveling salesperson problem (W-TSP) in dynamic settings.
In the dynamic setting of the problem, items that have to be collected as part of a TSP tour change over time.
Our first experimental investigations study the impact of such changes on resulting optimized tours.
arXiv Detail & Related papers (2023-05-30T11:39:49Z) - Vector Autoregressive Evolution for Dynamic Multi-Objective Optimisation [7.5104598146227]
Dynamic multi-objective optimisation (DMO) handles optimisation problems with multiple objectives in varying environments.
This paper proposes vector autoregressive evolution (VARE), consisting of vector autoregression (VAR) and environment-aware hypermutation, to address environmental changes in DMO.
arXiv Detail & Related papers (2023-05-22T06:24:25Z) - Reproducibility and Baseline Reporting for Dynamic Multi-objective Benchmark Problems [4.859986264602551]
This paper focuses on simulation experiments for the parameters of DMOPs.
A baseline schema for dynamic algorithm evaluation is introduced.
This lets us establish the minimum capability required of purpose-built dynamic algorithms to be useful.
arXiv Detail & Related papers (2022-04-08T15:50:17Z) - Learning Robust Policy against Disturbance in Transition Dynamics via State-Conservative Policy Optimization [63.75188254377202]
Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments.
We propose a novel model-free actor-critic algorithm to learn robust policies without modeling the disturbance in advance.
Experiments in several robot control tasks demonstrate that SCPO learns robust policies against the disturbance in transition dynamics.
arXiv Detail & Related papers (2021-12-20T13:13:05Z) - Result Diversification by Multi-objective Evolutionary Algorithms with Theoretical Guarantees [94.72461292387146]
We propose to reformulate the result diversification problem as a bi-objective search problem, and solve it by a multi-objective evolutionary algorithm (EA).
We theoretically prove that the GSEMO can achieve the optimal polynomial-time approximation ratio, $1/2$.
When the objective function changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by Borodin et al.
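GSEMO itself is a simple, well-known algorithm and can be sketched in a few lines. The bitstring encoding and the bi-objective toy function in the test are illustrative choices, not the diversification objectives from the paper.

```python
import random

def dominates(a, b):
    """Strict Pareto dominance for maximization."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def weakly_dominates(a, b):
    return all(x >= y for x, y in zip(a, b))

def gsemo(f, n, iters, seed=0):
    """GSEMO on bitstrings of length n: keep an archive of mutually
    non-dominated solutions; each step, mutate a uniformly chosen archive
    member by flipping each bit independently with probability 1/n, and
    insert the offspring if no archived solution strictly dominates it,
    evicting everything the offspring weakly dominates."""
    rng = random.Random(seed)
    x = tuple(rng.randint(0, 1) for _ in range(n))
    archive = {x: f(x)}
    for _ in range(iters):
        parent = rng.choice(list(archive))
        child = tuple(b ^ (rng.random() < 1.0 / n) for b in parent)
        fc = f(child)
        if any(dominates(fv, fc) for fv in archive.values()):
            continue
        archive = {v: fv for v, fv in archive.items()
                   if not weakly_dominates(fc, fv)}
        archive[child] = fc
    return archive
```

The archive-based acceptance rule is what lets the population track a moving Pareto front when the objective changes, which is the behavior the theoretical result above concerns.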
arXiv Detail & Related papers (2021-10-18T14:00:22Z) - Robust Value Iteration for Continuous Control Tasks [99.00362538261972]
When transferring a control policy from simulation to a physical system, the policy needs to be robust to variations in the dynamics to perform well.
We present Robust Fitted Value Iteration, which uses dynamic programming to compute the optimal value function on the compact state domain.
We show that Robust Fitted Value Iteration is more robust than deep reinforcement learning algorithms and the non-robust version of the algorithm.
arXiv Detail & Related papers (2021-05-25T19:48:35Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
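The Differential Evolution baseline that EOS improves upon has a standard core. A sketch of one generation of the classic DE/rand/1/bin scheme is below; the function name and parameter values are illustrative, and EOS's own self-adaptive, multi-population extensions are not shown.

```python
import numpy as np

def de_rand_1_bin(pop, F, CR, rng):
    """One generation of DE/rand/1/bin: for each target vector, build a
    mutant from three distinct other members and binomially cross it with
    the target. Selection against an objective is left to the caller."""
    n, d = pop.shape
    trials = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])       # differential mutation
        cross = rng.random(d) < CR                       # binomial crossover mask
        cross[rng.integers(d)] = True                    # force >= 1 mutant gene
        trials[i] = np.where(cross, mutant, pop[i])
    return trials
```

Self-adaptive DE variants such as those EOS builds on typically evolve `F` and `CR` per individual rather than fixing them, which is the axis of improvement the summary refers to.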
arXiv Detail & Related papers (2020-07-09T10:19:22Z) - Single- and Multi-Objective Evolutionary Algorithms for the Knapsack
Problem with Dynamically Changing Constraints [13.896724650508087]
We investigate single- and multi-objective baseline evolutionary algorithms for the classical knapsack problem.
Our results show that the multi-objective approaches using a population that caters for dynamic changes have a clear advantage on many benchmark scenarios.
arXiv Detail & Related papers (2020-04-27T03:50:24Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence rates for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.