Improving population size adapting CMA-ES algorithm on step-size blow-up in weakly-structured multimodal functions
- URL: http://arxiv.org/abs/2506.00825v1
- Date: Sun, 01 Jun 2025 04:10:44 GMT
- Title: Improving population size adapting CMA-ES algorithm on step-size blow-up in weakly-structured multimodal functions
- Authors: Chandula Fernando, Kushani De Silva
- Abstract summary: The balance between exploration and exploitation can be maintained by adjusting parameter settings. The population size adaptation covariance matrix adaptation evolution strategy algorithm (PSA-CMA-ES) achieves this balance by dynamically adjusting population size. In weakly structured multimodal problems, however, the algorithm struggles to effectively manage step-size increases. In this study, we reformulated the step-size correction strategy to overcome this limitation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal optimization requires both exploration and exploitation. Exploration identifies promising attraction basins, while exploitation finds the best solutions within these basins. The balance between exploration and exploitation can be maintained by adjusting parameter settings. The population size adaptation covariance matrix adaptation evolution strategy algorithm (PSA-CMA-ES) achieves this balance by dynamically adjusting population size. PSA-CMA-ES performs well on well-structured multimodal benchmark problems. In weakly structured multimodal problems, however, the algorithm struggles to effectively manage step-size increases, resulting in uncontrolled step-size blow-ups that impede convergence near the global optimum. In this study, we reformulated the step-size correction strategy to overcome this limitation. We analytically identified the cause of the step-size blow-up and demonstrated the existence of a significance level for population size change that guides a safe passage to step-size correction. These insights were incorporated into the reformulation. Through computer experiments on two weakly structured multimodal benchmark problems, we evaluated the performance of the new approach and compared the results with a state-of-the-art algorithm. The improved algorithm successfully mitigates step-size blow-up, enabling a better balance between exploration and exploitation near the global optimum and enhancing convergence.
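The step-size dynamics at issue follow the cumulative step-size adaptation (CSA) rule used in CMA-ES: the step size grows when the evolution path is longer than expected under random selection, which is the mechanism that can blow up when population-size changes perturb the path. Below is a minimal sketch of the standard CSA update only; the paper's reformulated correction strategy is not reproduced here, and all parameter names are illustrative:

```python
import numpy as np

def csa_step_size_update(sigma, p_sigma, c_sigma, d_sigma, dim):
    """Cumulative step-size adaptation (CSA) update used in CMA-ES.

    sigma grows when the evolution path p_sigma is longer than the
    expected norm of an N(0, I) vector, and shrinks when it is shorter.
    """
    # Approximate expected norm of a dim-dimensional standard normal vector
    chi_n = np.sqrt(dim) * (1 - 1 / (4 * dim) + 1 / (21 * dim ** 2))
    ratio = np.linalg.norm(p_sigma) / chi_n
    return sigma * np.exp((c_sigma / d_sigma) * (ratio - 1))
```

A persistently long evolution path keeps `ratio > 1`, so sigma grows multiplicatively every generation; this is the feedback loop that, if left uncorrected, produces the blow-up the paper addresses.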
Related papers
- AdaEvolve: Adaptive LLM Driven Zeroth-Order Optimization [61.535567824938205]
We introduce AdaEvolve, a framework that reformulates LLM-driven evolution as a hierarchical adaptive optimization problem. AdaEvolve consistently outperforms the open-ended baselines across 185 different open-ended optimization problems.
arXiv Detail & Related papers (2026-02-23T18:45:31Z) - Iterative Refinement of Flow Policies in Probability Space for Online Reinforcement Learning [56.47948583452555]
We introduce the Stepwise Flow Policy (SWFP) framework, founded on the key insight that discretizing the flow matching inference process via a fixed-step Euler scheme aligns it with the variational Jordan-Kinderlehrer-Otto principle from optimal transport. SWFP decomposes the global flow into a sequence of small, incremental transformations between proximate distributions. This decomposition yields an efficient algorithm that fine-tunes pre-trained flows via a cascade of small flow blocks, offering significant advantages.
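The fixed-step Euler discretization underlying this decomposition can be sketched in a few lines. The velocity field below is a toy linear field, not the paper's learned flow:

```python
import numpy as np

def euler_flow(x0, velocity, n_steps=10):
    """Transport samples from t=0 to t=1 with a fixed-step Euler scheme.

    The global flow is decomposed into n_steps small incremental maps
    x <- x + dt * v(x, t), one per step.
    """
    dt = 1.0 / n_steps
    x = np.asarray(x0, dtype=float)
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x

# Toy linear field v(x, t) = -x contracts samples toward the origin,
# approximating x * exp(-1) after integrating over [0, 1]
out = euler_flow(np.array([1.0, -2.0]), lambda x, t: -x, n_steps=100)
```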
arXiv Detail & Related papers (2025-10-17T07:43:51Z) - LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization [54.35609820607923]
Large language models (LLMs) offer new opportunities for assisting with algorithm design. We propose LLM4CMO, a novel CMOEA based on a dual-population, two-stage framework. LLMs can serve as efficient co-designers in the development of complex evolutionary optimization algorithms.
arXiv Detail & Related papers (2025-08-16T02:00:57Z) - Efficient and Scalable Deep Reinforcement Learning for Mean Field Control Games [16.62770187749295]
Mean Field Control Games (MFCGs) provide a powerful theoretical framework for analyzing systems of infinitely many interacting agents. This paper presents a scalable deep Reinforcement Learning (RL) approach to approximate equilibrium solutions of MFCGs.
arXiv Detail & Related papers (2024-12-28T02:04:53Z) - A Detection Region Method-Based Evolutionary Algorithm for Binary Constrained Multiobjective Optimization [9.025272135956236]
This paper proposes a novel algorithm called DRMCMO based on the detection region method. In DRMCMO, detection regions dynamically monitor feasible solutions to enhance convergence, helping the population escape local optima. We have modified three existing test suites to serve as benchmark test problems for CMOPs with binary constraints.
arXiv Detail & Related papers (2024-11-13T08:39:04Z) - Modified CMA-ES Algorithm for Multi-Modal Optimization: Incorporating Niching Strategies and Dynamic Adaptation Mechanism [0.03495246564946555]
This study modifies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm for multi-modal optimization problems.
The enhancements focus on addressing the challenges of multiple global minima, improving the algorithm's ability to maintain diversity and explore complex fitness landscapes.
We incorporate niching strategies and dynamic adaptation mechanisms to refine the algorithm's performance in identifying and optimizing multiple global optima.
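One common niching strategy in this family is clearing: within each niche of a given radius, only the best individuals keep their fitness, so selection cannot collapse the population onto a single basin. A minimal sketch, assuming maximization; the radius and capacity values are illustrative, not the paper's settings:

```python
import numpy as np

def clearing_niching(population, fitness, radius, capacity=1):
    """Clearing-style niching: within each niche of the given radius,
    only the best `capacity` individuals keep their fitness; the rest
    are cleared (set to -inf) so selection preserves diversity.
    """
    population = np.asarray(population, dtype=float)
    fitness = np.asarray(fitness, dtype=float).copy()
    order = np.argsort(-fitness)          # best first (maximization)
    cleared = np.zeros(len(fitness), dtype=bool)
    for idx, i in enumerate(order):
        if cleared[i]:
            continue
        kept = 1                          # i itself wins its niche
        for j in order[idx + 1:]:
            if cleared[j]:
                continue
            if np.linalg.norm(population[i] - population[j]) < radius:
                if kept < capacity:
                    kept += 1
                else:
                    cleared[j] = True
                    fitness[j] = -np.inf
    return fitness
```

After clearing, ordinary selection operates on the modified fitness values, and each surviving niche winner anchors one basin of attraction.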
arXiv Detail & Related papers (2024-07-01T03:41:39Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
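For reference, the stochastic EnKF analysis step that this framework builds on can be sketched as follows. This is a plain linear-Gaussian version; the variational GPSSM machinery of the paper is not reproduced here:

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_cov, rng):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble: (n_members, dim) forecast ensemble
    y_obs:    observation vector (obs_dim,)
    H:        linear observation operator (obs_dim, dim)
    obs_cov:  observation noise covariance (obs_dim, obs_dim)
    """
    n = ensemble.shape[0]
    x_mean = ensemble.mean(axis=0)
    A = ensemble - x_mean                          # ensemble anomalies
    P = A.T @ A / (n - 1)                          # sample covariance
    S = H @ P @ H.T + obs_cov                      # innovation covariance
    K = P @ H.T @ np.linalg.solve(S, np.eye(len(y_obs)))  # Kalman gain
    # Perturbed observations, one independent draw per ensemble member
    perturbed = y_obs + rng.multivariate_normal(
        np.zeros(len(y_obs)), obs_cov, size=n)
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T
```

Each member is nudged toward its own perturbed copy of the observation, so the updated ensemble spread approximates the posterior covariance without any explicit parameterization.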
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - An Improved Artificial Fish Swarm Algorithm for Solving the Problem of Investigation Path Planning [8.725702964289479]
We propose a chaotic artificial fish swarm algorithm based on multiple population differential evolution (DE-CAFSA).
We incorporate adaptive field of view and step size adjustments, replace random behavior with the 2-opt operation, and introduce chaos theory and sub-optimal solutions.
Experimental results demonstrate that DE-CAFSA outperforms other algorithms on various public datasets of different sizes.
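The 2-opt operation mentioned above is a classic local search move on tours: delete two edges and reconnect the path with a reversed segment whenever that shortens the tour. A standalone sketch on a distance matrix (not DE-CAFSA itself):

```python
import math

def two_opt(path, dist):
    """Repeatedly apply 2-opt moves: reverse any segment whose reversal
    shortens the closed tour, until no improving move remains."""
    path = list(path)
    n = len(path)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = path[i - 1], path[i]
                c, d = path[j], path[(j + 1) % n]
                # Replace edges (a,b) and (c,d) with (a,c) and (b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    path[i:j + 1] = reversed(path[i:j + 1])
                    improved = True
    return path

# Unit square: the crossing tour 0-2-1-3 untangles to the optimal tour
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(p, q) for q in coords] for p in coords]
tour = two_opt([0, 2, 1, 3], dist)
```

Replacing random behavior with 2-opt, as the paper does, substitutes a deterministic improvement step for a blind move.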
arXiv Detail & Related papers (2023-10-20T09:35:51Z) - Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for the general-sum Markov Games (MGs) under the general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides comparable sublinear regret to the existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z) - Balancing Exploration and Exploitation for Solving Large-scale Multiobjective Optimization via Attention Mechanism [18.852491892952514]
We propose a large-scale multiobjective optimization algorithm based on the attention mechanism, called LMOAM.
The attention mechanism will assign a unique weight to each decision variable, and LMOAM will use this weight to strike a balance between exploration and exploitation from the decision variable level.
arXiv Detail & Related papers (2022-05-20T09:45:49Z) - Covariance Matrix Adaptation Evolution Strategy Assisted by Principal Component Analysis [4.658166900129066]
We use the dimensionality reduction method principal component analysis (PCA) to reduce the dimension during the iterations of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES).
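The PCA step referred to here amounts to projecting the population onto its top-k principal directions and searching in that subspace. A minimal sketch via SVD; how CMA-ES consumes the projection is not shown, and all names are illustrative:

```python
import numpy as np

def pca_reduce(samples, k):
    """Project samples onto their top-k principal components.

    Returns reduced coordinates, the component matrix, and the mean,
    so points can be mapped back via reduced @ components + mean.
    """
    X = np.asarray(samples, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are principal directions, ordered by singular value
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    return Xc @ components.T, components, mean
```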
arXiv Detail & Related papers (2021-05-08T12:43:38Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z) - Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that attenuating step-size is required for exact convergence with the fact that constant step-size learns faster in time up to an error.
Rather than fixing the minibatch and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
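The trade-off described can be illustrated with a toy SGD loop that keeps the step size constant but grows the minibatch geometrically, so averaged gradient noise shrinks over time. This is a sketch of the general idea, not the paper's scheme, and all constants are illustrative:

```python
import numpy as np

def sgd_adaptive_batch(grad_fn, x0, base_lr=0.5, batch0=2, growth=2.0,
                       rounds=6, rng=None):
    """SGD with a geometrically growing minibatch instead of a decaying
    step size: averaging more gradient samples shrinks the noise, so a
    constant learning rate can be kept while still converging."""
    rng = rng or np.random.default_rng(0)
    x = float(x0)
    batch = batch0
    for _ in range(rounds):
        # Average `batch` noisy gradient samples before stepping
        g = np.mean([grad_fn(x, rng) for _ in range(int(batch))])
        x -= base_lr * g
        batch *= growth
    return x
```

With the quadratic loss f(x) = x^2/2, the noisy gradient is x plus noise; the constant step contracts the error geometrically while the growing batch keeps the noise from dominating.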
arXiv Detail & Related papers (2020-07-02T16:02:02Z) - GACEM: Generalized Autoregressive Cross Entropy Method for Multi-Modal Black Box Constraint Satisfaction [69.94831587339539]
We present a modified Cross-Entropy Method (CEM) that uses a masked auto-regressive neural network for modeling uniform distributions over the solution space.
Our algorithm is able to express complicated solution spaces, thus allowing it to track a variety of different solution regions.
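The baseline this work modifies is the standard cross-entropy method with a Gaussian sampler. A minimal sketch of that loop follows; the paper swaps the Gaussian for a masked autoregressive network, which is not shown:

```python
import numpy as np

def cross_entropy_method(score_fn, dim, iters=30, pop=100,
                         elite_frac=0.2, rng=None):
    """Plain Gaussian cross-entropy method: sample a population, keep
    the elite fraction by score, refit the sampler to the elites."""
    rng = rng or np.random.default_rng(0)
    mu, sigma = np.zeros(dim), np.ones(dim)
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([score_fn(s) for s in samples])
        elites = samples[np.argsort(-scores)[:n_elite]]
        # Refit the sampling distribution (small floor keeps sigma > 0)
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu

# Maximizing -||x - 1||^2 drives mu toward the optimum at (1, 1)
best = cross_entropy_method(lambda x: -np.sum((x - 1.0) ** 2), dim=2)
```

A single Gaussian collapses onto one region; the autoregressive sampler in the paper is what lets it track several solution regions at once.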
arXiv Detail & Related papers (2020-02-17T20:21:20Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z) - Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it on correlated actions, and combine these critic estimated action values to control the variance of gradient estimation.
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques.
arXiv Detail & Related papers (2020-02-10T04:23:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.