Bivariate Estimation-of-Distribution Algorithms Can Find an Exponential
Number of Optima
- URL: http://arxiv.org/abs/2310.04042v1
- Date: Fri, 6 Oct 2023 06:32:07 GMT
- Title: Bivariate Estimation-of-Distribution Algorithms Can Find an Exponential
Number of Optima
- Authors: Benjamin Doerr and Martin S. Krejca
- Abstract summary: We propose the test function EqualBlocksOneMax (EBOM) to support the study of how optimization algorithms handle large sets of optima.
We show that EBOM behaves very similarly to a theoretically ideal model for EBOM, which samples each of the exponentially many optima with the same maximal probability.
- Score: 12.009357100208353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Finding a large set of optima in a multimodal optimization landscape is a
challenging task. Classical population-based evolutionary algorithms typically
converge only to a single solution. While this can be counteracted by applying
niching strategies, the number of optima is nonetheless trivially bounded by
the population size. Estimation-of-distribution algorithms (EDAs) are an
alternative, maintaining a probabilistic model of the solution space instead of
a population. Such a model is able to implicitly represent a solution set far
larger than any realistic population size.
To support the study of how optimization algorithms handle large sets of
optima, we propose the test function EqualBlocksOneMax (EBOM). It has an easy
fitness landscape with exponentially many optima. We show that the bivariate
EDA mutual-information-maximizing input clustering, without any
problem-specific modification, quickly generates a model that behaves very
similarly to a theoretically ideal model for EBOM, which samples each of the
exponentially many optima with the same maximal probability. We also prove via
mathematical means that no univariate model can come close to having this
property: If the probability to sample an optimum is at least
inverse-polynomial, there is a Hamming ball of logarithmic radius such that,
with high probability, each sample is in this ball.
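
To make the optimum structure concrete, the following is a minimal sketch of an EBOM-style fitness function. The block length of 2 is an assumption made for illustration (the abstract above does not state it); any string in which every block is uniformly 0 or uniformly 1 is optimal, so the number of optima grows exponentially in the number of blocks.

```python
# Minimal sketch of an EqualBlocksOneMax (EBOM)-style fitness function.
# BLOCK_LEN = 2 is an assumption for illustration; the abstract only states
# that the landscape has exponentially many optima.

BLOCK_LEN = 2

def ebom(x: list[int]) -> int:
    """Number of blocks whose bits are all equal; maximal iff every block is."""
    assert len(x) % BLOCK_LEN == 0
    blocks = [x[i:i + BLOCK_LEN] for i in range(0, len(x), BLOCK_LEN)]
    return sum(1 for b in blocks if len(set(b)) == 1)

# With n bits there are 2^(n / BLOCK_LEN) optima: each block may independently
# be all 0s or all 1s.
print(ebom([0, 0, 1, 1, 0, 0]))  # 3: all three blocks uniform (an optimum)
print(ebom([0, 1, 1, 1, 0, 0]))  # 2: the first block mixes 0 and 1
```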
Related papers
- Runtime Analysis of a Multi-Valued Compact Genetic Algorithm on Generalized OneMax [2.07180164747172]
We provide a first runtime analysis of a generalized OneMax function.
We show that the r-cGA solves this r-valued OneMax problem efficiently.
Finally, based on experiments, we state a conjecture about the expected runtime of another variant of the multi-valued OneMax function.
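
As an illustration of the setting (not necessarily the paper's exact definition), one common r-valued generalization of OneMax counts the positions at which a string over the alphabet {0, ..., r-1} agrees with a hidden target:

```python
# Hedged sketch of an r-valued OneMax variant: fitness is the number of
# positions matching a hidden target string over {0, ..., r-1}. The paper's
# exact variant (and the second one in its conjecture) may differ in detail.

import random

def r_onemax(x: list[int], target: list[int]) -> int:
    """Count positions where x agrees with the hidden target."""
    return sum(int(a == b) for a, b in zip(x, target))

r, n = 4, 10
target = [random.randrange(r) for _ in range(n)]
x = [random.randrange(r) for _ in range(n)]
print(r_onemax(x, target), "of", n, "positions correct")
```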
arXiv Detail & Related papers (2024-04-17T10:40:12Z) - Approximating a RUM from Distributions on k-Slates [88.32814292632675]
We give a polynomial-time algorithm that finds the RUM that best approximates the given distribution on average.
Our theoretical result can also be made practical: we obtain an algorithm that is effective and scales to real-world datasets.
arXiv Detail & Related papers (2023-05-22T17:43:34Z) - Selection of the Most Probable Best [2.1095005405219815]
We consider an expected-value ranking and selection (R&S) problem where all k solutions' simulation outputs depend on a common parameter whose uncertainty can be modeled by a distribution.
We define the most probable best (MPB) to be the solution that has the largest probability of being optimal with respect to the distribution.
We devise a series of algorithms that replace the unknown means in the optimality conditions with their estimates and prove the algorithms' sampling ratios achieve the conditions as the simulation budget increases.
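
A minimal sketch of the MPB idea, with toy conditional means standing in for the unknown simulated ones: draw the common parameter from its distribution, record which solution is best under each draw, and pick the solution that wins most often.

```python
# Sketch of estimating the most probable best (MPB). The conditional means
# below are toy placeholders; the paper replaces unknown means with
# simulation estimates and analyzes the resulting sampling ratios.

import numpy as np

rng = np.random.default_rng(0)
k = 3

def mean_performance(theta: float) -> np.ndarray:
    """Toy conditional means of the k solutions given theta (assumed form)."""
    return np.array([theta, 1.0 - theta, 0.5 + 0.1 * theta])

draws = rng.beta(2.0, 2.0, size=10_000)          # uncertainty about theta
best = np.array([np.argmax(mean_performance(t)) for t in draws])
p_best = np.bincount(best, minlength=k) / len(draws)
print("P(solution i is optimal):", p_best)
print("estimated MPB:", int(np.argmax(p_best)))
```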
arXiv Detail & Related papers (2022-07-15T15:27:27Z) - Efficient Approximation of Expected Hypervolume Improvement using
Gauss-Hermite Quadrature [0.0]
We show that Gauss-Hermite quadrature can be an accurate alternative to Monte Carlo for both independent and correlated predictive densities.
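
A minimal sketch of the underlying quadrature step, assuming a one-dimensional Gaussian predictive density and a toy integrand in place of the actual expected-hypervolume-improvement integrand:

```python
# Gauss-Hermite quadrature for an expectation under a Gaussian, the building
# block used in place of Monte Carlo. f is a toy improvement-style integrand,
# not the EHVI integrand itself.

import numpy as np

def gauss_hermite_expectation(f, mu: float, sigma: float, deg: int = 20) -> float:
    """Approximate E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite nodes."""
    nodes, weights = np.polynomial.hermite.hermgauss(deg)
    # Change of variables x = mu + sqrt(2) * sigma * t gives a 1/sqrt(pi) factor.
    return float(np.sum(weights * f(mu + np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi))

f = lambda x: np.maximum(x, 0.0)
print(gauss_hermite_expectation(f, mu=0.5, sigma=1.0))

rng = np.random.default_rng(1)   # Monte Carlo reference for comparison
print(np.mean(f(rng.normal(0.5, 1.0, size=200_000))))
```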
arXiv Detail & Related papers (2022-06-15T22:09:48Z) - An Application of a Multivariate Estimation of Distribution Algorithm to
Cancer Chemotherapy [59.40521061783166]
Chemotherapy treatment for cancer is a complex optimisation problem with a large number of interacting variables and constraints.
One might expect the more sophisticated algorithm to yield better performance on a complex problem like this, but we find that it does not.
We hypothesise that this is caused by the more sophisticated algorithm being impeded by the large number of interactions in the problem.
arXiv Detail & Related papers (2022-05-17T15:28:46Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper bounds can be extended to two-layer ReLU neural networks.
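
To make the setting concrete, here is a toy uniform quantizer for learned linear-model weights; the paper's minimax-optimal scheme and its information-theoretic analysis are of course more involved than this sketch.

```python
# Toy uniform quantizer for the weights of a learned linear model. This only
# illustrates the problem setting; it is not the paper's minimax-optimal scheme.

import numpy as np

def quantize_uniform(w: np.ndarray, bits: int) -> np.ndarray:
    """Map each weight to the nearest of 2^bits levels spanning [min, max]."""
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(2)
w = rng.normal(size=8)               # weights learned from measurements
w_q = quantize_uniform(w, bits=4)
print("max weight error:", np.max(np.abs(w - w_q)))
```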
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Distributed stochastic proximal algorithm with random reshuffling for
non-smooth finite-sum optimization [28.862321453597918]
Non-smooth finite-sum minimization is a fundamental problem in machine learning.
This paper develops a distributed proximal-gradient algorithm with random reshuffling to solve the problem.
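
A single-machine sketch of the core update, proximal gradient with random reshuffling on a toy l1-regularized least-squares finite sum; the paper's contribution is running such updates in a distributed setting.

```python
# Proximal gradient with random reshuffling on a non-smooth finite sum
# (least squares + l1). Single-machine sketch of the core update only.

import numpy as np

rng = np.random.default_rng(3)
n, d, lam, step = 50, 5, 0.1, 0.01
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.zeros(d)
for epoch in range(100):
    for i in rng.permutation(n):           # random reshuffling each epoch
        grad_i = (A[i] @ x - b[i]) * A[i]  # gradient of the i-th summand
        x = soft_threshold(x - step * grad_i, step * lam)
print("solution:", np.round(x, 3))
```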
arXiv Detail & Related papers (2021-11-06T07:29:55Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
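
A hedged sketch of the model-based idea: rather than estimating the policy gradient purely from random perturbations, fit a probabilistic model (here a small Gaussian process) to local evaluations and follow the gradient of its posterior mean. The objective is a toy stand-in for the true return.

```python
# Local search with a probabilistic surrogate: fit a noiseless RBF Gaussian
# process to local returns and ascend the gradient of its posterior mean.
# The quadratic objective is a toy stand-in for the true return.

import numpy as np

rng = np.random.default_rng(4)
objective = lambda p: -np.sum((p - 1.0) ** 2)   # toy return, maximized at p = 1

def gp_mean_grad(theta, X, y, length=0.5):
    """Gradient at theta of the posterior mean of a zero-mean RBF GP."""
    diff = theta - X                                     # shape (m, d)
    k = np.exp(-np.sum(diff ** 2, axis=1) / (2 * length ** 2))
    K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=2) / (2 * length ** 2))
    alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)
    return (-diff / length ** 2 * k[:, None]).T @ alpha  # grad of k(theta, X) . alpha

theta = np.zeros(2)
for _ in range(200):
    X = theta + 0.1 * rng.normal(size=(8, 2))   # local random perturbations
    y = np.array([objective(x) for x in X])
    theta = theta + 0.05 * gp_mean_grad(theta, X, y)  # ascend the model gradient
print("theta:", np.round(theta, 2))             # moves toward the optimum at (1, 1)
```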
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Grouped Variable Selection with Discrete Optimization: Computational and
Statistical Perspectives [9.593208961737572]
We present a new algorithmic framework for grouped variable selection that is based on discrete mathematical optimization.
Our methodology covers both high-dimensional linear regression and nonparametric sparse additive modeling.
Our exact algorithm is based on a standalone branch-and-bound (BnB) framework, which can solve the associated mixed integer programming (MIP) problem to certified optimality.
arXiv Detail & Related papers (2021-04-14T19:21:59Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume maximization on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.