Combining Genetic Programming and Particle Swarm Optimization to
Simplify Rugged Landscapes Exploration
- URL: http://arxiv.org/abs/2206.03241v1
- Date: Tue, 7 Jun 2022 12:55:04 GMT
- Title: Combining Genetic Programming and Particle Swarm Optimization to
Simplify Rugged Landscapes Exploration
- Authors: Gloria Pietropolli, Giuliamaria Menara, Mauro Castelli
- Abstract summary: We propose a novel method for constructing a smooth surrogate model of the original function.
The proposed algorithm, called the GP-FST-PSO Surrogate Model, achieves satisfactory results in both the search for the global optimum and the production of a visual approximation of the original benchmark function.
- Score: 7.25130576615102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most real-world optimization problems are difficult to solve with traditional
statistical techniques or with metaheuristics. The main difficulty is related
to the existence of a considerable number of local optima, which may result in
the premature convergence of the optimization process. To address this problem,
we propose a novel heuristic method for constructing a smooth surrogate model
of the original function. The surrogate function is easier to optimize but
maintains a fundamental property of the original rugged fitness landscape: the
location of the global optimum. To create such a surrogate model, we consider a
linear genetic programming approach enhanced by a self-tuning fitness function.
The proposed algorithm, called the GP-FST-PSO Surrogate Model, achieves
satisfactory results in both the search for the global optimum and the
production of a visual approximation of the original benchmark function (in the
2-dimensional case).
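The paper ships no code here, but the paradigm it describes can be sketched in a few lines: build a smooth surrogate that preserves the location of the global optimum, then run PSO on the surrogate instead of the rugged original. In the minimal sketch below, the GP-evolved surrogate is replaced by a hand-picked smooth stand-in (the quadratic envelope of the Rastrigin function, which shares its global optimum at the origin); the PSO update rule, inertia weight w, and coefficients c1/c2 are standard textbook choices, not the authors' GP-FST-PSO settings.

```python
import numpy as np

def rastrigin(x):                      # rugged original: many local optima
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def surrogate(x):                      # smooth stand-in sharing the global optimum
    return np.sum(x**2)

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

x_star = pso(surrogate)                # optimize the smooth surrogate only
print(x_star, rastrigin(x_star))       # lands near the rugged function's optimum
```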
Related papers
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
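ZOPO's NTK-derived Gaussian process is beyond a short sketch, but the zeroth-order backbone such methods build on is simple: estimate a gradient from function queries alone. A minimal sketch, assuming a generic black-box objective f and the standard two-point Gaussian-smoothing estimator (the smoothing radius mu and sample count are illustrative):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-2, n_samples=20, rng=None):
    """Two-point Gaussian-smoothing gradient estimate from f-queries only."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / n_samples

f = lambda x: np.sum((x - 1.0) ** 2)    # toy black-box objective
x = np.zeros(4)
for _ in range(100):
    x -= 0.1 * zo_gradient(f, x)        # plain zeroth-order gradient descent
print(x)                                # converges near [1, 1, 1, 1]
```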
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Generative Adversarial Model-Based Optimization via Source Critic Regularization [25.19579059511105]
We propose generative adversarial model-based optimization using adaptive source critic regularization (aSCR).
aSCR constrains the optimization trajectory to regions of the design space where the surrogate function is reliable.
We show how leveraging aSCR with standard Bayesian optimization outperforms existing methods on a suite of offline generative design tasks.
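A hedged sketch of the underlying idea only, not the paper's GAN-based source critic: screen the surrogate optimizer's proposals with a reliability score and keep only candidates in regions supported by the offline data. Here the negated distance to the training set stands in for the learned critic.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, (200, 2))            # offline design dataset

def reliability(x):                                # crude stand-in for a critic
    return -np.min(np.linalg.norm(X_train - x, axis=1))

candidates = rng.uniform(-3, 3, (1000, 2))        # surrogate-optimizer proposals
scores = np.array([reliability(c) for c in candidates])
trusted = candidates[scores > np.quantile(scores, 0.8)]   # keep in-distribution ones
print(len(trusted), "of", len(candidates), "candidates kept")
```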
arXiv Detail & Related papers (2024-02-09T16:43:57Z)
- Pseudo-Bayesian Optimization [7.556071491014536]
We study an axiomatic framework that elicits the minimal requirements to guarantee black-box optimization convergence.
We show that using simple local regression, together with a suitable "randomized prior" construction to quantify uncertainty, not only guarantees convergence but also consistently outperforms state-of-the-art benchmarks.
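A minimal sketch of the "randomized prior" construction on a toy 1-D problem: each ensemble member fits a simple regressor to bootstrapped data after subtracting its own random prior function, then adds the prior back at prediction time; the ensemble spread serves as the uncertainty estimate. (The paper uses local regression; a global linear fit is used here for brevity, so the details are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (30, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)

def fit_member(X, y, rng):
    a, b = rng.standard_normal(2)                 # random sinusoid prior
    prior = lambda Z: np.sin(a * Z[:, 0] + b)
    idx = rng.integers(0, len(X), len(X))         # bootstrap resample
    A = np.hstack([X[idx], np.ones((len(idx), 1))])
    w, *_ = np.linalg.lstsq(A, y[idx] - prior(X[idx]), rcond=None)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ w + prior(Z)

members = [fit_member(X, y, rng) for _ in range(10)]
Xq = np.linspace(-2, 2, 5).reshape(-1, 1)
preds = np.stack([m(Xq) for m in members])
print(preds.mean(axis=0))    # ensemble mean prediction
print(preds.std(axis=0))     # ensemble spread = uncertainty estimate
```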
arXiv Detail & Related papers (2023-10-15T07:55:28Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
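ZO-signGD itself is easy to sketch: form the same kind of two-point gradient estimate shown earlier, but step only along its sign, which is robust to noisy magnitude estimates. The objective, step size, and sample counts below are illustrative stand-ins, not the Guacamol setup.

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.01, mu=1e-2, n_samples=20, iters=300, seed=0):
    """ZO-signGD: descend along the sign of a query-based gradient estimate."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(iters):
        g = np.zeros_like(x)
        for _ in range(n_samples):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        x -= lr * np.sign(g / n_samples)   # only the sign of the estimate is used
    return x

f = lambda x: np.sum((x - 0.5) ** 2)       # stand-in for a molecular objective
print(zo_sign_gd(f, np.zeros(3)))          # converges near [0.5, 0.5, 0.5]
```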
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Multi-objective robust optimization using adaptive surrogate models for problems with mixed continuous-categorical parameters [0.0]
Robust design optimization is traditionally considered when uncertainties mainly affect the objective function.
The resulting nested optimization problem may be solved using a general-purpose solver, herein the non-dominated sorting genetic algorithm (NSGA-II).
The proposed approach consists of sequentially carrying out NSGA-II while using an adaptively built Kriging model to estimate the quantiles.
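What the adaptive Kriging model estimates can be sketched directly: a quantile of the performance under parameter uncertainty, used as the robust objective. The sketch below computes it by plain Monte Carlo on a cheap stand-in model (the paper's point is precisely to avoid this cost via the surrogate); the noise level and quantile are illustrative assumptions.

```python
import numpy as np

def model(x, theta):                        # performance under uncertain parameters
    return (x - theta) ** 2

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 0.3, 2000)          # common random numbers across designs

def robust_objective(x, alpha=0.95):
    return np.quantile(model(x, theta), alpha)   # alpha-quantile, not the mean

xs = np.linspace(-2, 2, 41)
best = min(xs, key=robust_objective)
print(best)                                 # robust optimum under the 95% quantile
```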
arXiv Detail & Related papers (2022-03-03T20:23:18Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
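The problem setting, as opposed to the paper's normalized-maximum-likelihood estimator, fits in a few lines: given a fixed dataset of (x, f(x)) pairs and no further queries, fit a model and optimize it. The quadratic least-squares model below is an illustrative stand-in; naive models of this kind over-extrapolate outside the data, which is the failure mode NML is designed to temper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (100, 1))
y = -np.sum(X**2, axis=1) + 0.05 * rng.standard_normal(100)   # fixed offline queries

# Fit a quadratic model to the fixed dataset; no new function evaluations allowed.
A = np.hstack([X**2, X, np.ones_like(X)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

xs = np.linspace(-1, 1, 201).reshape(-1, 1)
preds = np.hstack([xs**2, xs, np.ones_like(xs)]) @ w
print(xs[preds.argmax()])                  # maximize the learned model instead of f
```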
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- Real-Time Optimization Meets Bayesian Optimization and Derivative-Free Optimization: A Tale of Modifier Adaptation [0.0]
This paper investigates a new class of modifier-adaptation schemes to overcome plant-model mismatch in real-time optimization of uncertain processes.
The proposed schemes embed a physical model and rely on trust-region ideas to minimize risk during the exploration.
The benefits of using an acquisition function, knowing the process noise level, or specifying a nominal process model are illustrated.
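Classical first-order modifier adaptation, which these schemes build on, is compact enough to sketch: measure the bias and gradient mismatch between plant and model at the current operating point, correct the model with them, and re-optimize. The toy plant/model pair and finite-difference plant gradients below are assumptions for illustration; the paper's contribution (trust regions, acquisition functions, noise handling) is layered on top of this loop.

```python
import numpy as np
from scipy.optimize import minimize_scalar

plant = lambda u: (u - 1.2) ** 2           # true (unknown) process
model = lambda u: (u - 0.8) ** 2           # mismatched model
dplant = lambda u, h=1e-4: (plant(u + h) - plant(u - h)) / (2 * h)
dmodel = lambda u, h=1e-4: (model(u + h) - model(u - h)) / (2 * h)

u_k = 0.0
for _ in range(10):
    eps = plant(u_k) - model(u_k)          # zeroth-order (bias) modifier
    lam = dplant(u_k) - dmodel(u_k)        # first-order (gradient) modifier
    corrected = lambda u: model(u) + eps + lam * (u - u_k)
    u_k = minimize_scalar(corrected, bounds=(-2, 3), method="bounded").x
print(u_k)                                 # converges to the plant optimum, 1.2
```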
arXiv Detail & Related papers (2020-09-18T12:57:17Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
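For reference, the DE/rand/1/bin baseline that EOS improves upon can be sketched directly; the self-adaptive parameter control and multiple parallel populations that distinguish EOS are omitted here, and F and CR are common textbook values.

```python
import numpy as np

def de(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)      # DE/rand/1 mutation
            cross = rng.random(dim) < CR                   # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            if (ft := f(trial)) < fit[i]:                  # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[fit.argmin()], fit.min()

sphere = lambda x: np.sum(x**2)
print(de(sphere, (np.full(5, -5.0), np.full(5, 5.0))))
```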
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
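A heavily hedged sketch of the core idea only: replace a high-dimensional decision vector with a few latent coordinates through a linear map and optimize in the latent space. The paper learns this map end-to-end through the optimization layer; here a fixed random map and a toy quadratic objective stand in.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dim, latent_dim = 100, 3
P = rng.standard_normal((dim, latent_dim))        # stand-in for the learned map
P /= np.linalg.norm(P, axis=0)

target = rng.standard_normal(dim)
objective = lambda x: np.sum((x - target) ** 2)   # large "optimization problem"

# Optimize 3 latent coordinates instead of 100 decision variables.
res = minimize(lambda z: objective(P @ z), np.zeros(latent_dim))
print(res.fun, objective(np.zeros(dim)))          # latent optimum vs. doing nothing
```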
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Bilevel Optimization for Differentially Private Optimization in Energy Systems [53.806512366696275]
This paper studies how to apply differential privacy to constrained optimization problems whose inputs are sensitive.
The paper shows that, under a natural assumption, a bilevel model can be solved efficiently for large-scale nonlinear optimization problems.
arXiv Detail & Related papers (2020-01-26T20:15:28Z)
- Finding Optimal Points for Expensive Functions Using Adaptive RBF-Based Surrogate Model Via Uncertainty Quantification [11.486221800371919]
We propose a novel global optimization framework using an adaptive Radial Basis Function (RBF)-based surrogate model via uncertainty quantification.
It first employs an RBF-based Bayesian surrogate model to approximate the true function, where the parameters of the RBFs can be adaptively estimated and updated each time a new point is explored.
It then utilizes a model-guided selection criterion to identify a new point from a candidate set for function evaluation.
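A minimal sketch of RBF-surrogate-guided search, assuming a Gaussian kernel and a toy 1-D objective: fit an interpolant to the evaluated points, then pick the next evaluation from a candidate set by trading off the surrogate prediction against distance to existing samples. The distance term is a crude exploration proxy; the paper instead quantifies uncertainty by adaptively estimating the RBF parameters in a Bayesian treatment.

```python
import numpy as np

def fit_rbf(X, y, gamma=1.0):
    """Gaussian-kernel RBF interpolant through the evaluated points."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    w = np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)
    return lambda Z: np.exp(-gamma * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ w

f = lambda x: np.sum(x**2) + np.sin(5 * x).sum()   # expensive-function stand-in
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (5, 1))
y = np.array([f(x) for x in X])

for _ in range(20):
    s = fit_rbf(X, y)                              # refit after each new point
    cand = rng.uniform(-2, 2, (500, 1))
    dist = np.min(np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=-1), axis=1)
    score = s(cand) - 0.5 * dist                   # exploit prediction, explore gaps
    x_new = cand[score.argmin()]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))
print(X[y.argmin()], y.min())
```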
arXiv Detail & Related papers (2020-01-19T16:15:55Z)