On Steady-State Evolutionary Algorithms and Selective Pressure: Why
Inverse Rank-Based Allocation of Reproductive Trials is Best
- URL: http://arxiv.org/abs/2103.10394v1
- Date: Thu, 18 Mar 2021 17:27:05 GMT
- Title: On Steady-State Evolutionary Algorithms and Selective Pressure: Why
Inverse Rank-Based Allocation of Reproductive Trials is Best
- Authors: Dogan Corus and Andrei Lissovoi and Pietro S. Oliveto and Carsten Witt
- Abstract summary: We analyse the impact of selective pressure on the global optimisation capabilities of steady-state EAs.
For the standard bimodal benchmark function TwoMax we rigorously prove that using uniform parent selection leads, with high probability, to exponential runtimes for locating both optima.
On the other hand, we prove that selecting the worst individual as parent leads to efficient global optimisation with overwhelming probability for reasonable population sizes.
- Score: 9.290757451344673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We analyse the impact of selective pressure on the global optimisation
capabilities of steady-state EAs. For the standard bimodal benchmark function
\textsc{TwoMax} we rigorously prove that using uniform parent selection leads,
with high probability, to exponential runtimes for locating both optima for the
standard ($\mu$+1)~EA and ($\mu$+1)~RLS with any polynomial population sizes.
On the other hand, we prove that selecting the worst individual as parent leads
to efficient global optimisation with overwhelming probability for reasonable
population sizes. Since always selecting the worst individual may have
detrimental effects for escaping from local optima, we consider the performance
of stochastic parent selection operators with low selective pressure for a
function class called \textsc{TruncatedTwoMax} where one slope is shorter than
the other. An experimental analysis shows that the EAs equipped with inverse
tournament selection, where the loser is selected for reproduction and small
tournament sizes, globally optimise \textsc{TwoMax} efficiently and effectively
escape from local optima of \textsc{TruncatedTwoMax} with high probability.
Thus they identify both optima efficiently while uniform (or stronger)
selection fails in theory and in practice. We then show the power of inverse
selection on function classes from the literature where populations are
essential by providing rigorous proofs or experimental evidence that it
outperforms uniform selection equipped with or without a restart strategy. We
conclude the paper by confirming our theoretical insights with an empirical
analysis of the different selective pressures on standard benchmarks of the
classical MaxSat and Multidimensional Knapsack Problems.
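The setup in the abstract can be illustrated with a short sketch. The following Python code is a minimal, illustrative implementation of a steady-state ($\mu$+1) EA on TwoMax with inverse tournament parent selection, where the tournament loser is chosen for reproduction. It is not the authors' code; the function names, defaults, and replacement rule are my own assumptions.

```python
import random

def twomax(x):
    # Bimodal benchmark: fitness is the larger of the number of ones
    # and the number of zeros; optima at the all-ones and all-zeros strings.
    ones = sum(x)
    return max(ones, len(x) - ones)

def inverse_tournament_parent(pop, fitness, k=2):
    # Inverse tournament: sample k distinct individuals uniformly at
    # random and return the index of the LOSER (lowest fitness).
    contestants = random.sample(range(len(pop)), k)
    return min(contestants, key=lambda i: fitness[i])

def steady_state_ea(n=20, mu=8, k=2, max_evals=20000):
    # (mu+1) EA: one offspring per generation via standard bit mutation
    # (each bit flipped with probability 1/n), replacing the current
    # worst individual if the offspring is at least as fit.
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    fit = [twomax(x) for x in pop]
    for _ in range(max_evals):
        p = inverse_tournament_parent(pop, fit, k)
        child = [b ^ (random.random() < 1.0 / n) for b in pop[p]]
        cf = twomax(child)
        worst = min(range(mu), key=lambda i: fit[i])
        if cf >= fit[worst]:
            pop[worst], fit[worst] = child, cf
    return pop
```

With small tournament sizes (e.g. k=2) the selective pressure is low, which is the regime the experimental analysis above credits with maintaining individuals on both slopes of TwoMax.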
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO)
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- Minimum variance threshold for epsilon-lexicase selection [0.7373617024876725]
Methods often rely on average error over the entire dataset as a criterion to select the parents.
We propose a new criterion that splits errors into two partitions so as to minimise the total within-partition variance.
Our results show better performance of our approach compared to traditional epsilon-lexicase selection on real-world datasets.
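The split criterion summarised above can be sketched as a one-dimensional search over contiguous partitions of the sorted errors. This is an illustrative reconstruction from the summary, not the paper's actual algorithm; the function name and the convention of returning the first value of the upper partition are assumptions.

```python
def min_variance_threshold(errors):
    """Split the sorted errors into two contiguous partitions whose summed
    within-partition variances are minimal; return the first value of the
    upper partition as the threshold."""
    e = sorted(errors)

    def variance(xs):
        # Population variance; zero for fewer than two elements.
        if len(xs) < 2:
            return 0.0
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Try every split point 1..len-1 and keep the one with the
    # smallest total within-partition variance.
    best_split = min(range(1, len(e)),
                     key=lambda i: variance(e[:i]) + variance(e[i:]))
    return e[best_split]
```

Under the (assumed) convention that individuals whose error falls below the returned threshold pass a case's filter, this split would replace the fixed epsilon band of standard epsilon-lexicase selection.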
arXiv Detail & Related papers (2024-04-08T23:47:26Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Neural Tangent Kernel-based derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Dual-Directed Algorithm Design for Efficient Pure Exploration [11.492736493413103]
We consider pure-exploration problems in the context of sequential adaptive experiments with a finite set of alternative options.
We derive a sufficient condition for optimality in terms of a notion of strong convergence to the optimal allocation of samples.
Our algorithm is optimal for $\epsilon$-best-arm identification and thresholding bandit problems.
arXiv Detail & Related papers (2023-10-30T07:29:17Z)
- qPOTS: Efficient batch multiobjective Bayesian optimization via Pareto optimal Thompson sampling [0.0]
A sample-efficient approach to solving multiobjective optimization is via Gaussian process (GP) surrogates and multiobjective Bayesian optimization (MOBO).
We propose a Thompson sampling (TS) based approach ($q\texttt{POTS}$).
$q\texttt{POTS}$ solves a cheap multiobjective optimization on the GP posteriors with evolutionary approaches.
arXiv Detail & Related papers (2023-10-24T12:35:15Z)
- Optimize-via-Predict: Realizing out-of-sample optimality in data-driven optimization [0.0]
We examine a formulation for data-driven optimization wherein the decision-maker is not privy to the true distribution.
We define a prescriptive solution as a decision rule mapping such a data set to decisions.
We present an optimization problem that solves for such an out-of-sample optimal solution, and does so efficiently via a combination of sampling and bisection search algorithms.
arXiv Detail & Related papers (2023-09-20T08:48:50Z)
- Bayesian Optimization with Conformal Prediction Sets [44.565812181545645]
Conformal prediction is an uncertainty quantification method with coverage guarantees even for misspecified models.
We propose conformal Bayesian optimization, which directs queries towards regions of search space where the model predictions have guaranteed validity.
In many cases we find that query coverage can be significantly improved without harming sample-efficiency.
arXiv Detail & Related papers (2022-10-22T17:01:05Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Optimizing Partial Area Under the Top-k Curve: Theory and Practice [151.5072746015253]
We develop a novel metric named partial Area Under the top-k Curve (AUTKC)
AUTKC has a better discrimination ability, and its Bayes optimal score function could give a correct top-K ranking with respect to the conditional probability.
We present an empirical surrogate risk minimization framework to optimize the proposed metric.
arXiv Detail & Related papers (2022-09-03T11:09:13Z)
- Model Selection in Batch Policy Optimization [88.52887493684078]
We study the problem of model selection in batch policy optimization.
We identify three sources of error that any model selection algorithm should optimally trade-off in order to be competitive.
arXiv Detail & Related papers (2021-12-23T02:31:50Z)
- Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.