Accelerating the Evolutionary Algorithms by Gaussian Process Regression
with $\epsilon$-greedy acquisition function
- URL: http://arxiv.org/abs/2210.06814v1
- Date: Thu, 13 Oct 2022 07:56:47 GMT
- Title: Accelerating the Evolutionary Algorithms by Gaussian Process Regression
with $\epsilon$-greedy acquisition function
- Authors: Rui Zhong, Enzhi Zhang, Masaharu Munetomo
- Abstract summary: We propose a novel method to estimate the elite individual
and thereby accelerate the convergence of optimization; experiments on the
CEC2013 benchmark functions suggest the approach has broad prospects.
- Score: 2.7716102039510564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a novel method that estimates the elite
individual in order to accelerate the convergence of optimization. Inspired by
the Bayesian Optimization Algorithm (BOA), Gaussian Process Regression (GPR)
is applied to approximate the fitness landscape of the original problem using
the individuals of every generation of optimization. A simple but efficient
$\epsilon$-greedy acquisition function is then employed to find a promising
solution in the surrogate model. The Proximity Optimal Principle (POP) states
that well-performing solutions have a similar structure, so there is a high
probability that better solutions exist around the elite individual. Based on
this hypothesis, in each generation of optimization we replace the worst
individual of the Evolutionary Algorithm (EA) population with the estimated
elite individual, which then participates in the evolution process. To
illustrate the scalability of our proposal, we combine it with the Genetic
Algorithm (GA), Differential Evolution (DE), and CMA-ES. Experimental results
on the CEC2013 benchmark functions show that our proposal holds broad promise
for estimating the elite individual and accelerating the convergence of
optimization.
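A minimal Python sketch of the per-generation step described in the abstract,
assuming minimization over a box-constrained search space, a scikit-learn GPR
surrogate, and uniformly sampled candidate points. The helper names
(epsilon_greedy_elite, replace_worst, objective) and all parameter values are
hypothetical illustrations, not the authors' implementation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def epsilon_greedy_elite(population, fitness, lower, upper,
                         epsilon=0.1, n_candidates=1000, rng=None):
    """Estimate an elite individual from a GPR surrogate of the fitness landscape.

    With probability epsilon a random candidate is returned (exploration);
    otherwise the candidate with the best predicted fitness is returned
    (exploitation). Minimization is assumed.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Fit the surrogate on the current generation, as in BOA-style methods.
    surrogate = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    surrogate.fit(population, fitness)
    # Sample candidate points uniformly inside the box constraints.
    dim = population.shape[1]
    candidates = rng.uniform(lower, upper, size=(n_candidates, dim))
    if rng.random() < epsilon:
        return candidates[rng.integers(n_candidates)]  # explore
    predicted = surrogate.predict(candidates)
    return candidates[np.argmin(predicted)]            # exploit


def replace_worst(population, fitness, lower, upper, objective):
    """One surrogate-assisted generation step: elite replaces the worst."""
    elite = epsilon_greedy_elite(population, fitness, lower, upper)
    worst = np.argmax(fitness)          # worst individual under minimization
    population[worst] = elite           # elite joins the evolution process
    fitness[worst] = objective(elite)   # evaluate on the true objective
    return population, fitness
```

Here $\epsilon$ trades off exploiting the surrogate's best prediction against
uniform random exploration; the paper plugs the same replacement step into GA,
DE, and CMA-ES.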
Related papers
- Integrating Chaotic Evolutionary and Local Search Techniques in Decision Space for Enhanced Evolutionary Multi-Objective Optimization [1.8130068086063336]
This paper focuses on both Single-Objective Multi-Modal Optimization (SOMMOP) and Multi-Objective Optimization (MOO).
In SOMMOP, we integrate chaotic evolution with niching techniques, as well as Persistence-Based Clustering combined with Gaussian mutation.
For MOO, we extend these methods into a comprehensive framework that incorporates Uncertainty-Based Selection, Adaptive Tuning, and introduces a radius ($R$) concept in deterministic crowding.
arXiv Detail & Related papers (2024-11-12T15:18:48Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Combining Kernelized Autoencoding and Centroid Prediction for Dynamic Multi-objective Optimization [3.431120541553662]
This paper proposes a unified paradigm which combines kernelized autoencoding evolutionary search and centroid-based prediction.
The proposed method is compared with five state-of-the-art algorithms on a number of complex benchmark problems.
arXiv Detail & Related papers (2023-12-02T00:24:22Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Distributed Evolution Strategies for Black-box Stochastic Optimization [42.90600124972943]
This work concerns evolutionary approaches to distributed black-box optimization.
Each worker can individually solve an approximation of the problem.
We propose two alternative simulation schemes which significantly improve the robustness of the approach.
arXiv Detail & Related papers (2022-04-09T11:18:41Z)
- Momentum Accelerates the Convergence of Stochastic AUPRC Maximization [80.8226518642952]
We study the optimization of the area under the precision-recall curve (AUPRC), which is widely used for imbalanced tasks.
We develop novel momentum methods with an improved iteration complexity of $O(1/\epsilon^4)$ for finding an $\epsilon$-stationary solution.
We also design a novel family of adaptive methods with the same complexity of $O(1/\epsilon^4)$, which enjoy faster convergence in practice.
arXiv Detail & Related papers (2021-07-02T16:21:52Z)
- Better call Surrogates: A hybrid Evolutionary Algorithm for Hyperparameter optimization [18.359749929678635]
We propose a surrogate-assisted evolutionary algorithm (EA) for hyperparameter optimization of machine learning (ML) models.
The proposed STEADE model initially estimates the objective function landscape using Radial Basis Function, and then transfers the knowledge to an EA technique called Differential Evolution.
We empirically evaluate our model on the hyperparameter optimization problems as a part of the black box optimization challenge at NeurIPS 2020 and demonstrate the improvement brought about by STEADE over the vanilla EA.
arXiv Detail & Related papers (2020-12-11T16:19:59Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel sample-efficient stochastic gradient estimator named stocBiO.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.