Robust Differential Evolution via Nonlinear Population Size Reduction and Adaptive Restart: The ARRDE Algorithm
- URL: http://arxiv.org/abs/2511.18429v1
- Date: Sun, 23 Nov 2025 12:50:25 GMT
- Title: Robust Differential Evolution via Nonlinear Population Size Reduction and Adaptive Restart: The ARRDE Algorithm
- Authors: Khoirul Faiq Muzakka, Ahsani Hafizhu Shali, Haris Suhendar, Sören Möller, Martin Finsterbusch
- Abstract summary: ARRDE builds upon the LSHADE algorithm, incorporates key mechanisms from jSO, and introduces a nonlinear population-size reduction strategy. ARRDE consistently demonstrates top-tier performance, ranking first across all benchmark suites considered.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study is motivated by a robustness issue in numerical optimization of bound-constrained problems: many algorithms that perform well on a particular benchmark suite, such as the IEEE CEC2017 problems, struggle to maintain the same level of performance when applied to other suites that differ in dimensionality, landscape complexity, or the maximum number of function evaluations ($N_{\text{max}}$). To address this, we propose the Adaptive Restart-Refine Differential Evolution (ARRDE) algorithm, a new variant of Differential Evolution (DE). ARRDE builds upon the LSHADE algorithm, incorporates key mechanisms from jSO, and introduces a nonlinear population-size reduction strategy combined with an adaptive restart-refine mechanism. We evaluate ARRDE on five benchmark suites (CEC2011, CEC2017, CEC2019, CEC2020, and CEC2022) which, to the best of our knowledge, constitutes the most extensive experimental study to date in the context of algorithmic comparison, as most prior works consider only one or two suites. This broad evaluation enables a rigorous assessment of generalization across markedly different problem characteristics. To further support fair cross-suite comparisons, we also introduce a bounded accuracy-based scoring metric derived from relative error. Using both rank-based and accuracy-based metrics, and comparing against algorithms that perform strongly on CEC2017 (e.g., jSO and LSHADE-cnEpSin) as well as those that excel on CEC2020 (e.g., j2020 and NLSHADE-RSP), ARRDE consistently demonstrates top-tier performance, ranking first across all benchmark suites considered. These results highlight ARRDE's robustness and its superior generalization capability.
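The abstract contrasts ARRDE's nonlinear population-size reduction with LSHADE's standard linear schedule. The sketch below illustrates the idea; the linear schedule is the one LSHADE uses, while the power-law form of the nonlinear schedule (and the exponent `p`) is a hypothetical illustration, since the abstract does not give ARRDE's exact formula.

```python
# Population-size reduction schedules in LSHADE-style Differential Evolution.
# nfe: function evaluations consumed so far; nfe_max: evaluation budget;
# n_init / n_min: initial and final population sizes.

def linear_pop_size(nfe, nfe_max, n_init, n_min):
    """LSHADE: population shrinks linearly with the consumed budget."""
    frac = nfe / nfe_max
    return round(n_init + (n_min - n_init) * frac)

def nonlinear_pop_size(nfe, nfe_max, n_init, n_min, p=2.0):
    """Hypothetical nonlinear schedule: for p > 1 the population shrinks
    slowly early (preserving exploration) and quickly late (favoring
    exploitation near the end of the budget)."""
    frac = (nfe / nfe_max) ** p
    return round(n_init + (n_min - n_init) * frac)
```

With `p > 1`, the nonlinear schedule keeps a larger population than the linear one at every intermediate point of the run, which is one plausible way to trade early diversity against late refinement.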
Related papers
- Principled Algorithms for Optimizing Generalized Metrics in Binary Classification [53.604375124674796]
We introduce principled algorithms for optimizing generalized metrics, supported by $H$-consistency and finite-sample generalization bounds. Our approach reformulates metric optimization as a generalized cost-sensitive learning problem. We develop new algorithms, METRO, with strong theoretical performance guarantees.
arXiv Detail & Related papers (2025-12-29T01:33:42Z) - SEB-ChOA: An Improved Chimp Optimization Algorithm Using Spiral Exploitation Behavior [22.87976383190481]
The chimp optimization algorithm (ChOA) is a nature-inspired algorithm that imitates chimpanzees' individual intelligence and hunting behaviors. This paper proposes six spiral functions and introduces two novel hybrid spiral functions (SEB-ChOA) to address these deficiencies. The performance of SEB-ChOA is evaluated on 23 standard benchmarks, 20 benchmarks of the IEEE CEC-2005 test suite, 10 cases from the IEEE CEC06-2019 test suite, and 12 constrained real-world engineering problems.
arXiv Detail & Related papers (2025-11-26T23:46:29Z) - QUASAR: An Evolutionary Algorithm to Accelerate High-Dimensional Optimization [0.0]
This paper introduces Quasi-Adaptive Search with Asymptotic Reinitialization (QUASAR). QUASAR is an evolutionary algorithm to accelerate convergence in complex, non-differentiable problems afflicted by the curse of dimensionality.
arXiv Detail & Related papers (2025-11-17T19:02:31Z) - NDCG-Consistent Softmax Approximation with Accelerated Convergence [67.10365329542365]
We propose novel loss formulations that align directly with ranking metrics. We integrate the proposed RG losses with the highly efficient Alternating Least Squares (ALS) optimization method. Empirical evaluations on real-world datasets demonstrate that our approach achieves comparable or superior ranking performance.
arXiv Detail & Related papers (2025-06-11T06:59:17Z) - A Benchmark for Maximum Cut: Towards Standardization of the Evaluation of Learned Heuristics for Combinatorial Optimization [12.016449555335976]
We propose an open-source benchmark suite MaxCut-Bench dedicated to the NP-hard Maximum Cut problem in both its weighted and unweighted variants.
We use the benchmark in an attempt to systematically corroborate or reproduce the results of several, popular learning-based approaches.
Our results show that several of the learned heuristics fail to outperform a naive greedy algorithm, and that only one of them consistently outperforms Tabu Search.
arXiv Detail & Related papers (2024-06-14T19:44:23Z) - Efficient Convex Algorithms for Universal Kernel Learning [46.573275307034336]
An ideal set of kernels should admit a linear parameterization (for tractability) and be dense in the set of all kernels (for accuracy).
Previous algorithms for optimization of kernels were limited to classification and relied on computationally complex Semidefinite Programming (SDP) algorithms.
We propose a SVD-QCQPQP algorithm which dramatically reduces the computational complexity as compared with previous SDP-based approaches.
arXiv Detail & Related papers (2023-04-15T04:57:37Z) - Regularization and Optimization in Model-Based Clustering [4.096453902709292]
k-means algorithm variants essentially fit a mixture of identical spherical Gaussians to data that vastly deviates from such a distribution.
We develop more effective optimization algorithms for general GMMs, and we combine these algorithms with regularization strategies that avoid overfitting.
These results shed new light on the current status quo between GMM and k-means methods and suggest the more frequent use of general GMMs for data exploration.
arXiv Detail & Related papers (2023-02-05T18:22:29Z) - Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z) - Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z) - On the Assessment of Benchmark Suites for Algorithm Comparison [7.501426386641256]
We show that most benchmark functions of BBOB suite have high difficulty levels (compared to the optimization algorithms) and low discrimination.
We discuss potential uses of IRT in benchmarking, including its use to improve the design of benchmark suites.
arXiv Detail & Related papers (2021-04-15T11:20:11Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient stochastic gradient estimator named stoc-BiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.