EXPObench: Benchmarking Surrogate-based Optimisation Algorithms on
Expensive Black-box Functions
- URL: http://arxiv.org/abs/2106.04618v1
- Date: Tue, 8 Jun 2021 18:17:42 GMT
- Title: EXPObench: Benchmarking Surrogate-based Optimisation Algorithms on
Expensive Black-box Functions
- Authors: Laurens Bliek, Arthur Guijt, Rickard Karlsson, Sicco Verwer, Mathijs
de Weerdt
- Abstract summary: We provide an extensive comparison of six different surrogate algorithms on four expensive optimisation problems from different real-life applications.
This has led to new insights regarding the relative importance of exploration, the evaluation time of the objective, and the used model.
We make the algorithms and benchmark problem instances publicly available, contributing to more uniform analysis of surrogate algorithms.
- Score: 4.8980686156238535
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Surrogate algorithms such as Bayesian optimisation are especially designed
for black-box optimisation problems with expensive objectives, such as
hyperparameter tuning or simulation-based optimisation. In the literature,
these algorithms are usually evaluated with synthetic benchmarks which are well
established but have no expensive objective, and only on one or two real-life
applications which vary wildly between papers. There is a clear lack of
standardisation when it comes to benchmarking surrogate algorithms on
real-life, expensive, black-box objective functions. This makes it very
difficult to draw conclusions on the effect of algorithmic contributions. A new
benchmark library, EXPObench, provides first steps towards such a
standardisation. The library is used to provide an extensive comparison of six
different surrogate algorithms on four expensive optimisation problems from
different real-life applications. This has led to new insights regarding the
relative importance of exploration, the evaluation time of the objective, and
the used model. A further contribution is that we make the algorithms and
benchmark problem instances publicly available, contributing to more uniform
analysis of surrogate algorithms. Most importantly, we include the performance
of the six algorithms on all evaluated problem instances. This results in a
unique new dataset that lowers the bar for researching new methods as the
number of expensive evaluations required for comparison is significantly
reduced.
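As a rough illustration of the class of methods EXPObench benchmarks (this is not the library's actual API; the helper names and the toy objective below are assumptions), a minimal surrogate-based optimisation loop looks as follows: fit a Gaussian-process surrogate to the evaluations gathered so far, then choose the next expensive evaluation by maximising an expected-improvement acquisition over cheap random candidates.

```python
# Minimal sketch of a surrogate-based optimisation loop (illustrative only;
# this is NOT the EXPObench API, just the generic pattern it benchmarks).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expensive_objective(x):
    # Placeholder for an expensive black-box function (e.g. a simulator run
    # or a hyperparameter-tuning trial); here just a cheap toy function.
    return float(np.sum((x - 0.3) ** 2))


def expected_improvement(mu, sigma, best):
    # Expected improvement for minimisation, with a small floor on sigma.
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)


def surrogate_optimise(dim=3, n_init=5, budget=25, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_init, dim))        # initial design
    y = np.array([expensive_objective(x) for x in X])     # expensive calls
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

    while len(y) < budget:
        gp.fit(X, y)                                       # fit the surrogate
        cand = rng.uniform(0.0, 1.0, size=(2048, dim))     # cheap candidate points
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        y_next = expensive_objective(x_next)               # one expensive call
        X = np.vstack([X, x_next])
        y = np.append(y, y_next)

    best = np.argmin(y)
    return X[best], y[best]


if __name__ == "__main__":
    x_best, y_best = surrogate_optimise()
    print("best point:", x_best, "objective:", y_best)
```

In the benchmark setting, `expensive_objective` would presumably be one of the four real-life problems, with the wall-clock evaluation time recorded alongside the objective value.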
Related papers
- Speeding-up Evolutionary Algorithms to solve Black-Box Optimization
Problems [0.0]
Population-based evolutionary algorithms are often considered when approaching computationally expensive black-box optimization problems.
We propose a technique capable of choosing an appropriate approximate function cost during the execution of the optimization algorithm.
The proposal finds the minimum evaluation cost at which the solutions are still properly ranked, and consequently, more evaluations can be computed in the same amount of time with minimal accuracy loss.
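The cost-selection idea can be illustrated with a hypothetical multi-fidelity sketch (the `evaluate` function and cost levels below are assumptions, not the paper's implementation): a sample of solutions is ranked at increasing cost levels, and the cheapest level whose ranking already agrees with the most expensive one is kept.

```python
# Hypothetical sketch: pick the minimum evaluation cost at which a sample of
# solutions is still ranked (almost) the same as at the most expensive cost
# level. `evaluate` is an assumed multi-fidelity objective, not the paper's API.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)


def evaluate(x, cost):
    # Stand-in for an approximate objective whose accuracy improves with cost:
    # the noise shrinks as the per-evaluation budget grows.
    return (x - 0.3) ** 2 + rng.normal(scale=1.0 / cost)


def minimum_reliable_cost(solutions, cost_levels, tau_threshold=0.9):
    """Cheapest cost whose ranking agrees with the most expensive one."""
    reference = [evaluate(x, cost_levels[-1]) for x in solutions]
    for cost in cost_levels[:-1]:
        approx = [evaluate(x, cost) for x in solutions]
        tau, _ = kendalltau(approx, reference)
        if tau >= tau_threshold:
            return cost
    return cost_levels[-1]


solutions = np.linspace(0.0, 1.0, 20)
print(minimum_reliable_cost(solutions, cost_levels=[1, 2, 4, 8, 16]))
```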
arXiv Detail & Related papers (2023-09-23T11:54:46Z) - Equitable and Fair Performance Evaluation of Whale Optimization
Algorithm [4.0814527055582746]
It is essential that all algorithms are evaluated exhaustively and intelligently.
However, evaluating the effectiveness of optimization algorithms equitably and fairly is not an easy process, for various reasons.
arXiv Detail & Related papers (2023-09-04T06:32:02Z) - Exploring the Algorithm-Dependent Generalization of AUPRC Optimization
with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem for machine learning.
In this work, we present the first trial in the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z) - Fast Bayesian Optimization of Needle-in-a-Haystack Problems using
Zooming Memory-Based Initialization [73.96101108943986]
A Needle-in-a-Haystack problem arises when there is an extreme imbalance of optimum conditions relative to the size of the dataset.
We present a Zooming Memory-Based Initialization algorithm that builds on conventional Bayesian optimization principles.
arXiv Detail & Related papers (2022-08-26T23:57:41Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Bayesian Algorithm Execution: Estimating Computable Properties of
Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - On the Assessment of Benchmark Suites for Algorithm Comparison [7.501426386641256]
We show that most benchmark functions of the BBOB suite have high difficulty levels (compared to the optimization algorithms) and low discrimination.
We discuss potential uses of IRT in benchmarking, including its use to improve the design of benchmark suites.
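In item response theory (IRT) terms, each benchmark function is an "item" with a difficulty and a discrimination parameter, each algorithm has an ability, and the probability that an algorithm solves a function is a logistic function of ability minus difficulty, scaled by discrimination. The following is a generic two-parameter logistic (2PL) sketch on synthetic solve/fail data, illustrating the model class rather than the paper's analysis.

```python
# Generic two-parameter logistic (2PL) IRT model fitted to a binary
# algorithm-by-function "solved" matrix; illustrative, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n_algos, n_funcs = 8, 24
solved = rng.integers(0, 2, size=(n_algos, n_funcs)).astype(float)  # toy data

ability = np.zeros(n_algos)          # theta_i: skill of algorithm i
difficulty = np.zeros(n_funcs)       # b_j: difficulty of benchmark function j
discrimination = np.ones(n_funcs)    # a_j: how sharply function j separates algorithms


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


lr = 0.01
for _ in range(1500):
    # P(algorithm i solves function j) = sigmoid(a_j * (theta_i - b_j))
    logits = discrimination * (ability[:, None] - difficulty)
    p = sigmoid(logits)
    err = solved - p                      # gradient of the Bernoulli log-likelihood
    ability += lr * (err * discrimination).sum(axis=1)
    difficulty += lr * (-err * discrimination).sum(axis=0)
    discrimination += lr * (err * (ability[:, None] - difficulty)).sum(axis=0)

print("estimated difficulties:", np.round(difficulty, 2))
print("estimated discriminations:", np.round(discrimination, 2))
```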
arXiv Detail & Related papers (2021-04-15T11:20:11Z) - Benchmarking Simulation-Based Inference [5.3898004059026325]
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods.
We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms.
We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency.
arXiv Detail & Related papers (2021-01-12T18:31:22Z) - Continuous surrogate-based optimization algorithms are well-suited for
expensive discrete problems [9.655888042539495]
We present empirical evidence showing that continuous surrogate models display competitive performance against state-of-the-art discrete surrogate-based methods.
Our experiments on different discrete structures and time constraints also give more insight into which algorithms work well on which type of problem.
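One simple way to use a continuous surrogate on a discrete problem, in the spirit of this entry though not necessarily the authors' exact method, is to fit the surrogate on the integer points evaluated so far, search the continuous relaxation, and round the proposed point back to the grid before the expensive evaluation (the toy objective and acquisition below are assumptions).

```python
# Sketch (assumption, not the paper's exact algorithm): treat an integer-valued
# problem with a continuous Gaussian-process surrogate, propose points in the
# continuous relaxation, and round them back to the integer grid before the
# expensive evaluation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expensive_discrete_objective(x_int):
    # Placeholder for an expensive objective defined on integer vectors.
    return float(np.sum((x_int - 7) ** 2))


rng = np.random.default_rng(1)
dim, lo, hi = 4, 0, 15
X = rng.integers(lo, hi + 1, size=(6, dim)).astype(float)
y = np.array([expensive_discrete_objective(x) for x in X])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True, alpha=1e-6)

for _ in range(20):
    gp.fit(X, y)
    cand = rng.uniform(lo, hi, size=(1024, dim))       # continuous candidates
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = np.rint(cand[np.argmin(mu - sigma)])      # lower confidence bound, rounded
    y_next = expensive_discrete_objective(x_next)
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best integer point:", X[np.argmin(y)], "value:", y.min())
```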
arXiv Detail & Related papers (2020-11-06T15:27:45Z) - Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
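A dyadic representation pairs a feature vector describing the problem instance with a feature vector describing the candidate algorithm, so a single model can score (instance, algorithm) pairs even for algorithms with few or no prior observations. A hypothetical sketch on synthetic data, not the paper's setup:

```python
# Hypothetical sketch of a dyadic feature representation for algorithm
# selection: concatenate instance and algorithm features, train one regressor
# on observed (instance, algorithm, performance) triples, and rank candidate
# algorithms for a new instance. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_instances, n_algorithms = 200, 50
inst_feats = rng.normal(size=(n_instances, 10))   # features of problem instances
algo_feats = rng.normal(size=(n_algorithms, 6))   # features of candidate algorithms

# Observed performances for a sparse subset of (instance, algorithm) pairs.
pairs = [(i, j) for i in range(n_instances)
         for j in rng.choice(n_algorithms, 5, replace=False)]
X_train = np.array([np.concatenate([inst_feats[i], algo_feats[j]]) for i, j in pairs])
y_train = np.array([inst_feats[i, 0] * algo_feats[j, 0] + rng.normal(scale=0.1)
                    for i, j in pairs])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Rank all candidate algorithms for a previously unseen instance.
new_instance = rng.normal(size=10)
dyads = np.array([np.concatenate([new_instance, algo_feats[j]])
                  for j in range(n_algorithms)])
ranking = np.argsort(-model.predict(dyads))        # best predicted performance first
print("top-5 algorithms for the new instance:", ranking[:5])
```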
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.