CMA-ES with Radial Basis Function Surrogate for Black-Box Optimization
- URL: http://arxiv.org/abs/2505.16127v1
- Date: Thu, 22 May 2025 02:10:04 GMT
- Title: CMA-ES with Radial Basis Function Surrogate for Black-Box Optimization
- Authors: Farshid Farhadi Khouzani, Abdolreza Mirzaei, Paul La Plante, Laxmi Gewali
- Abstract summary: We propose CMA-SAO, a surrogate-assisted variant of the CMA-ES framework that builds an initial surrogate model to guide the adaptation of optimization parameters. Empirical validation reveals that the CMA-SAO algorithm markedly reduces the number of function evaluations in comparison to prevailing algorithms.
- Score: 1.581191445609191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evolutionary optimization algorithms often face defects and limitations that complicate the evolution process or even prevent them from reaching the global optimum. A notable constraint is the considerable number of function evaluations required to reach the intended solution, a concern that becomes especially significant for costly optimization problems. However, recent research has shown that integrating machine learning methods, specifically surrogate models, with evolutionary optimization can enhance various aspects of these algorithms. Among evolutionary algorithms, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is particularly favored: it samples candidate solutions from a multivariate Gaussian distribution and adapts its optimization parameters online, which reduces the need for user intervention in setting initial parameters. In this work, we propose CMA-SAO, a surrogate-assisted variant of the CMA-ES framework that builds an initial surrogate model and uses the information it provides to adapt the optimization parameters. Empirical validation reveals that the CMA-SAO algorithm markedly reduces the number of function evaluations in comparison to prevailing algorithms, thereby providing a significant enhancement in operational efficiency.
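To make the mechanism concrete, here is a minimal sketch, in Python with NumPy/SciPy, of the general pattern the abstract describes: candidates drawn from a Gaussian are pre-ranked by a Radial Basis Function surrogate so that only the most promising ones spend budget on the true objective. This is an illustrative reconstruction under simplifying assumptions, not the authors' CMA-SAO; full covariance-matrix and step-size adaptation are omitted, and the objective, population sizes, and update rules are toy choices.

```python
# Sketch: surrogate-assisted Gaussian search with RBF pre-screening.
# NOT the paper's CMA-SAO; covariance adaptation is deliberately omitted.
import numpy as np
from scipy.interpolate import RBFInterpolator

def sphere(x):                         # toy stand-in for an expensive objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
dim, pop, n_true = 5, 20, 5            # dimension, population, true evals/gen
mean, sigma = rng.normal(size=dim), 1.0
archive_x, archive_y = [], []          # all points evaluated on the true f

for gen in range(30):
    cand = mean + sigma * rng.normal(size=(pop, dim))
    if len(archive_x) > dim + 1:       # enough data to fit an RBF surrogate
        rbf = RBFInterpolator(np.array(archive_x), np.array(archive_y))
        order = np.argsort(rbf(cand))  # rank candidates by surrogate value
    else:
        order = rng.permutation(pop)   # no model yet: random ranking
    chosen = cand[order[:n_true]]      # only the top n_true get true evals
    ys = np.array([sphere(x) for x in chosen])
    archive_x.extend(chosen)
    archive_y.extend(ys)
    elite = chosen[np.argsort(ys)[: n_true // 2 + 1]]
    mean = elite.mean(axis=0)          # crude mean update (no full CMA)
    sigma *= 0.95                      # crude step-size decay

print(f"best f: {min(archive_y):.4g} after {len(archive_y)} true evaluations")
```

Evaluating only n_true of the pop candidates per generation is exactly where the evaluation savings come from; the paper's contribution, per the abstract, lies in additionally using the surrogate to inform parameter adaptation.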
Related papers
- Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach [4.442101733807905]
This study proposes a new framework that combines a large language model (LLM) with traditional evolutionary algorithms to enhance the search capability and generalization performance of the evolutionary algorithm.
We leverage an auxiliary evaluation function and automated prompt construction within the adaptive mechanism to flexibly adjust the utilization of the LLM.
arXiv Detail & Related papers (2024-10-03T08:37:02Z)
- Model Uncertainty in Evolutionary Optimization and Bayesian Optimization: A Comparative Analysis [5.6787965501364335]
Black-box optimization problems are common in many real-world applications.
These problems require optimization through input-output interactions without access to internal workings.
Evolutionary optimization and Bayesian optimization are two widely used gradient-free techniques for addressing such challenges.
This paper aims to elucidate the similarities and differences in how these two methods utilize model uncertainty (a schematic contrast is sketched below).
arXiv Detail & Related papers (2024-03-21T13:59:19Z)
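As a rough illustration of the contrast this entry studies, using standard scikit-learn/SciPy calls and hypothetical toy data: Bayesian optimization folds the surrogate's predictive uncertainty into an acquisition function such as expected improvement, whereas a surrogate-assisted evolutionary method typically ranks candidates by the predictive mean alone.

```python
# Sketch: how BO uses model uncertainty (expected improvement) versus a
# mean-only ranking. Toy problem and names; not from either paper.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x[:, 0]) + 0.1 * x[:, 0] ** 2   # toy black box

X = rng.uniform(-3, 3, size=(8, 1))            # observed designs
y = f(X)
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

cand = rng.uniform(-3, 3, size=(256, 1))       # candidate pool
mu, std = gp.predict(cand, return_std=True)    # mean AND uncertainty
best = y.min()
z = (best - mu) / np.maximum(std, 1e-12)
ei = (best - mu) * norm.cdf(z) + std * norm.pdf(z)   # expected improvement

print("BO pick (max EI):        x =", cand[np.argmax(ei)])
print("EA-style pick (min mean): x =", cand[np.argmin(mu)])
```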
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty (the OWA operator itself is sketched after this entry).
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
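For reference, the Ordered Weighted Averaging operator mentioned above applies its weights to the sorted outcomes rather than to fixed positions, which is what makes it fairness-oriented and nondifferentiable at ties. A minimal sketch with toy numbers, not the paper's experiments:

```python
# Sketch: the OWA operator. Weights attach to ranks, not to fixed objectives;
# decreasing weights on ascending-sorted outcomes emphasize the worst case.
import numpy as np

def owa(values: np.ndarray, weights: np.ndarray) -> float:
    """OWA(v) = sum_i w_i * v_(i), where v_(1) <= ... <= v_(n)."""
    return float(np.sort(values) @ weights)

outcomes = np.array([3.0, 9.0, 1.0])   # per-group utilities (toy numbers)
fair_w = np.array([0.6, 0.3, 0.1])     # heaviest weight on the worst outcome
print(owa(outcomes, fair_w))           # 0.6*1 + 0.3*3 + 0.1*9 = 2.4
```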
- Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend by two variants: 1) varying goals, that optimize solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, that accommodates different possibilities that can be activated or deactivated.
Results show that adaptation with standard NSGA-II greatly reduces the number of evaluations required for optimization to a target goal, while the proposed variants further reduce adaptation costs.
arXiv Detail & Related papers (2023-05-31T12:07:50Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- A Data-Driven Evolutionary Transfer Optimization for Expensive Problems in Dynamic Environments [9.098403098464704]
Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems.
This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems.
Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm.
arXiv Detail & Related papers (2022-11-05T11:19:50Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD), a method sketched after this entry.
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
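A minimal sketch of ZO-signGD as generally described in the literature: the gradient is estimated from function-value differences along random probe directions, and only the sign of the estimate drives the update. This is a generic illustration with toy settings, not the benchmarked implementation from the paper.

```python
# Sketch: zeroth-order sign-based gradient descent (ZO-signGD).
# Gradient estimate from central differences along random directions;
# only its sign is used for the step.
import numpy as np

def zo_sign_gd(f, x0, lr=0.05, mu=1e-2, n_dirs=20, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.normal(size=x.shape)               # random probe direction
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= lr * np.sign(g / n_dirs)                  # sign of the ZO estimate
    return x

f = lambda x: np.sum((x - 1.0) ** 2)                   # toy smooth objective
print(zo_sign_gd(f, np.zeros(10)))                     # approaches the optimum at 1
```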
- Better call Surrogates: A hybrid Evolutionary Algorithm for Hyperparameter optimization [18.359749929678635]
We propose a surrogate-assisted evolutionary algorithm (EA) for hyperparameter optimization of machine learning (ML) models.
The proposed STEADE model initially estimates the objective function landscape using a Radial Basis Function surrogate, and then transfers that knowledge to an EA technique called Differential Evolution (a two-stage sketch follows this entry).
We empirically evaluate our model on the hyperparameter optimization problems of the black-box optimization challenge at NeurIPS 2020 and demonstrate the improvement STEADE brings over the vanilla EA.
arXiv Detail & Related papers (2020-12-11T16:19:59Z)
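The two-stage idea can be sketched as follows, assuming SciPy's RBFInterpolator and differential_evolution as stand-ins for the components STEADE names. Seeding DE's initial population with the surrogate's most promising points is one plausible reading of "transfers the knowledge", not necessarily the authors' exact mechanism.

```python
# Sketch: fit an RBF model of the landscape from a cheap initial design,
# then warm-start Differential Evolution from the surrogate's best picks.
# Illustrative reconstruction, not the STEADE implementation.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
f = lambda x: float(np.sum(x ** 2) + np.sin(5 * x[0]))  # stand-in objective
bounds = [(-2.0, 2.0)] * 3

X = rng.uniform(-2, 2, size=(40, 3))                    # initial design
y = np.array([f(x) for x in X])
rbf = RBFInterpolator(X, y)                             # stage 1: RBF model

pool = rng.uniform(-2, 2, size=(2000, 3))               # score a cheap pool
init = pool[np.argsort(rbf(pool))[:15]]                 # 15 best as DE seeds

res = differential_evolution(f, bounds, init=init, maxiter=50, seed=3)
print(res.x, res.fun)                                   # stage 2: DE refines
```

Warm-starting the DE population is only one way to hand surrogate knowledge to the EA; the surrogate could also act as a cheap pre-filter inside DE's selection step.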
- Enhanced Innovized Repair Operator for Evolutionary Multi- and Many-objective Optimization [5.885238773559015]
"Innovization" is a task of learning common relationships among some or all of the Pareto-optimal (PO) solutions in optimisation problems.
Recent studies have shown that a chronological sequence of non-dominated solutions also possess salient patterns that can be used to learn problem features.
We propose a machine-learning- (ML-) assisted modelling approach that learns the modifications in design variables needed to advance population members towards the Pareto-optimal set.
arXiv Detail & Related papers (2020-11-21T10:29:15Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm (a minimal self-adaptive DE loop is sketched after this entry).
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
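A minimal self-adaptive DE loop in the jDE style that algorithms like EOS build on: each individual carries its own mutation factor F and crossover rate CR, occasionally resampled, and the new values survive only if the trial vector they produce wins selection. This is a generic single-population illustration; EOS's parallel, multi-population machinery is not reproduced here.

```python
# Sketch: jDE-style self-adaptive Differential Evolution (DE/rand/1/bin).
# Per-individual F and CR are resampled with small probability and are kept
# only when the resulting trial vector beats its parent.
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sum(x ** 2)                  # toy objective
NP, dim = 20, 5
X = rng.uniform(-5, 5, size=(NP, dim))
fit = np.array([f(x) for x in X])
F = np.full(NP, 0.5); CR = np.full(NP, 0.9)   # per-individual parameters

for gen in range(200):
    for i in range(NP):
        # self-adapt: occasionally resample this individual's F and CR
        Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
        CRi = rng.random() if rng.random() < 0.1 else CR[i]
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = X[a] + Fi * (X[b] - X[c])            # DE/rand/1 mutation
        cross = rng.random(dim) < CRi
        cross[rng.integers(dim)] = True               # at least one gene crosses
        trial = np.where(cross, mutant, X[i])
        ft = f(trial)
        if ft <= fit[i]:                              # greedy selection:
            X[i], fit[i] = trial, ft                  # trial replaces parent
            F[i], CR[i] = Fi, CRi                     # and its params survive

print("best:", fit.min())
```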
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model that estimates those parameters and then solving the problem using the estimates.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of better quality for the downstream decision-making task.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.