Improving LSHADE by means of a pre-screening mechanism
- URL: http://arxiv.org/abs/2204.04105v2
- Date: Mon, 11 Apr 2022 14:27:13 GMT
- Title: Improving LSHADE by means of a pre-screening mechanism
- Authors: Mateusz Zaborski and Jacek Mańdziuk
- Abstract summary: The paper introduces an extension to the well-known LSHADE algorithm in the form of a pre-screening mechanism (psLSHADE).
The proposed pre-screening relies on the following three components: a specific initial sampling procedure, an archive of samples, and a global linear meta-model of the fitness function.
The performance of psLSHADE is evaluated using the CEC2021 benchmark in an expensive scenario with an optimization budget of 10^2-10^4 FFEs per dimension.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Evolutionary algorithms have proven to be highly effective in continuous
optimization, especially when numerous fitness function evaluations (FFEs) are
possible. In certain cases, however, an expensive optimization approach (i.e.,
one with a relatively low number of FFEs) must be taken, and such a setting is
considered in this work. The paper introduces an extension to the well-known
LSHADE algorithm in the form of a pre-screening mechanism (psLSHADE). The
proposed pre-screening relies on the following three components: a specific
initial sampling procedure, an archive of samples, and a global linear
meta-model of a fitness function that consists of 6 independent transformations
of variables. The pre-screening mechanism preliminarily assesses the trial
vectors and designates the best of them for further evaluation with the
fitness function. The performance of psLSHADE is evaluated using the CEC2021
benchmark in an expensive scenario with an optimization budget of 10^2-10^4
FFEs per dimension. We compare psLSHADE with the baseline LSHADE method and the
MadDE algorithm. The results indicate that with restricted optimization budgets
psLSHADE visibly outperforms both competitive algorithms. In addition, the use
of the pre-screening mechanism results in faster population convergence of
psLSHADE compared to LSHADE.
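The pre-screening idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact set of six variable transformations and the archive handling are assumptions here; only the overall scheme (fit a global linear meta-model on archived samples, rank the trial vectors by predicted fitness, and spend a true FFE only on the most promising one) follows the abstract.

```python
import numpy as np

def transform(x):
    """Six per-variable transformations for the linear meta-model.
    The paper specifies six independent transformations; this exact set
    (identity, square, cube, sqrt, log, reciprocal) is only a guess."""
    ax = np.abs(x)
    return np.concatenate([x, x**2, x**3, np.sqrt(ax),
                           np.log1p(ax), 1.0 / (1.0 + ax)])

def prescreen(trial_vectors, archive_x, archive_f):
    """Fit a global linear meta-model on the archive of evaluated samples
    and return the index of the trial vector with the best (lowest)
    predicted fitness; only that vector would then be evaluated with the
    true fitness function."""
    X = np.vstack([transform(x) for x in archive_x])
    A = np.column_stack([np.ones(len(X)), X])            # intercept + features
    coef, *_ = np.linalg.lstsq(A, np.asarray(archive_f, float), rcond=None)
    T = np.vstack([transform(t) for t in trial_vectors])
    preds = np.column_stack([np.ones(len(T)), T]) @ coef
    return int(np.argmin(preds))                         # minimization
```

Because only one trial vector per generation costs a real evaluation, the meta-model's cheap predictions filter the rest, which is what makes the mechanism attractive in the expensive (low-FFE) setting.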
Related papers
- A Multi-operator Ensemble LSHADE with Restart and Local Search Mechanisms for Single-objective Optimization [0.0]
mLSHADE-RL is an enhanced version of LSHADE-cnEpSin, one of the winners of the CEC 2017 competition in single-objective optimization.
Three mutation strategies (DE/current-to-pbest-weight/1 with archive, DE/current-to-pbest/1 without archive, and DE/current-to-ordpbest-weight/1) are integrated into the original LSHADE-cnEpSin.
mLSHADE-RL is tested on 30 dimensions in the CEC 2024 competition on single-objective bound-constrained optimization.
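For context, here is a minimal sketch of the DE/current-to-pbest/1 mutation with an external archive, the base operator in the JADE/LSHADE family that the strategies above build on. The weighted and ordered variants named in the summary differ in how the pbest member is chosen and weighted, which is not reproduced here; the value of p is an illustrative default.

```python
import numpy as np

def de_current_to_pbest_1(pop, fitness, i, F, p=0.11, archive=None, rng=None):
    """DE/current-to-pbest/1 mutation with an optional external archive
    (minimization; a minimal sketch).
    pop: (n, d) array; fitness: length-n array; i: index of the current
    individual; F: scale factor; p: fraction defining the pbest pool."""
    rng = rng or np.random.default_rng()
    n = len(pop)
    k = max(1, int(round(p * n)))
    pbest = pop[rng.choice(np.argsort(fitness)[:k])]    # random top-p member
    r1 = pop[rng.choice([j for j in range(n) if j != i])]
    # r2 is drawn from the population combined with the archive
    pool = np.vstack([pop, np.asarray(archive)]) if archive is not None else pop
    r2 = pool[rng.integers(len(pool))]
    return pop[i] + F * (pbest - pop[i]) + F * (r1 - r2)
```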
arXiv Detail & Related papers (2024-09-24T11:49:08Z)
- e-COP : Episodic Constrained Optimization of Policies [12.854752753529151]
We present the first policy optimization algorithm for constrained Reinforcement Learning (RL) in episodic (finite horizon) settings.
We show that our algorithm performs similarly to or better than SoTA (non-episodic) algorithms adapted to the episodic setting.
arXiv Detail & Related papers (2024-06-13T20:12:09Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- Convergence Rate Analysis for Optimal Computing Budget Allocation Algorithms [1.713291434132985]
Ordinal optimization (OO) is a widely studied technique for optimizing discrete-event dynamic systems.
A well-known method in OO is optimal computing budget allocation (OCBA).
In this paper, we investigate two popular OCBA algorithms.
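For reference, the classic single-shot OCBA allocation rule (due to Chen et al.) can be sketched as follows; this is the textbook formula for selecting the design with the smallest mean, not necessarily the exact algorithms analyzed in the paper.

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """Classic OCBA allocation ratios for maximizing the probability of
    correctly selecting the design with the smallest mean (distinct
    means assumed). Returns continuous replication counts per design."""
    means = np.asarray(means, float)
    stds = np.asarray(stds, float)
    b = int(np.argmin(means))
    delta = means - means[b]                    # optimality gaps
    nonbest = np.array([i for i in range(len(means)) if i != b])
    ratios = np.ones(len(means))
    ref = nonbest[0]
    # N_i / N_ref = (sigma_i / delta_i)^2 / (sigma_ref / delta_ref)^2
    ratios[nonbest] = ((stds[nonbest] / delta[nonbest])**2
                       / (stds[ref] / delta[ref])**2)
    # N_b = sigma_b * sqrt(sum_i N_i^2 / sigma_i^2) over non-best designs
    ratios[b] = stds[b] * np.sqrt(np.sum(ratios[nonbest]**2 / stds[nonbest]**2))
    return budget * ratios / ratios.sum()
```

The rule concentrates the simulation budget on designs that are hard to distinguish from the incumbent best: small gaps and large variances both increase a design's share.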
arXiv Detail & Related papers (2022-11-27T04:55:40Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input contributes to the information gain on a query over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z)
- Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is an important tool for many machine learning problems.
We propose stoc-BiO, a novel algorithm with a sample-efficient gradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z)
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem that this paper focuses on is that of parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
arXiv Detail & Related papers (2020-06-23T01:34:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.