Learning How to Optimize Black-Box Functions With Extreme Limits on the
Number of Function Evaluations
- URL: http://arxiv.org/abs/2103.10321v1
- Date: Thu, 18 Mar 2021 15:30:15 GMT
- Title: Learning How to Optimize Black-Box Functions With Extreme Limits on the
Number of Function Evaluations
- Authors: Carlos Ansotegui, Meinolf Sellmann, Tapan Shah, Kevin Tierney
- Abstract summary: We consider black-box optimization in which only an extremely limited number of function evaluations, on the order of 100, are affordable.
We propose an original method that uses established approaches to propose a set of points for each batch and then down-selects from these candidate points to the number of trials that can be run in parallel.
We achieve an average reduction of 50% in normalized cost, which is a highly significant improvement in performance.
- Score: 3.0969191504482243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider black-box optimization in which only an extremely limited number
of function evaluations, on the order of 100, are affordable and the
function evaluations must be performed in even fewer batches of a limited
number of parallel trials. This is a typical scenario when optimizing variable
settings that are very costly to evaluate, for example in the context of
simulation-based optimization or machine learning hyperparameterization. We
propose an original method that uses established approaches to propose a set of
points for each batch and then down-selects from these candidate points to the
number of trials that can be run in parallel. The key novelty of our approach
lies in the introduction of a hyperparameterized method for down-selecting the
number of candidates to the allowed batch-size, which is optimized offline
using automated algorithm configuration. We tune this method for black-box
optimization and then evaluate it on classical black-box optimization benchmarks.
Our results show that it is possible to learn how to combine evaluation points
suggested by highly diverse black-box optimization methods conditioned on the
progress of the optimization. Compared with the state of the art in black-box
minimization and various other methods specifically geared towards few-shot
minimization, we achieve an average reduction of 50% in normalized cost, which
is a highly significant improvement in performance.
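A minimal sketch of the batch-and-down-select loop described in the abstract, assuming illustrative stand-ins throughout: the candidate generators, the 1-nearest-neighbour surrogate, and the weights w_quality/w_diversity are hypothetical placeholders for the hyperparameterized rule that the paper tunes offline with automated algorithm configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_candidates(X, y, bounds, n_per_source=8):
    """Pool candidates from diverse generators (stand-ins for the
    'established approaches': uniform sampling plus perturbations
    of the incumbent)."""
    lo, hi = bounds
    dim = X.shape[1]
    uniform = rng.uniform(lo, hi, size=(n_per_source, dim))
    best = X[np.argmin(y)]
    local = np.clip(best + 0.1 * (hi - lo) *
                    rng.standard_normal((n_per_source, dim)), lo, hi)
    return np.vstack([uniform, local])

def down_select(cands, X, y, batch_size, w_quality=0.7, w_diversity=0.3):
    """Greedy down-selection to the allowed batch size, scoring each
    candidate by a weighted sum of predicted quality (a crude 1-NN
    surrogate) and distance to already-chosen points. The weights are
    the hyperparameters the paper would tune offline."""
    dists = np.linalg.norm(cands[:, None, :] - X[None, :, :], axis=2)
    pred = y[np.argmin(dists, axis=1)]                   # 1-NN prediction
    quality = (pred.max() - pred) / (np.ptp(pred) + 1e-12)
    chosen = []
    for _ in range(batch_size):
        if chosen:
            d = np.linalg.norm(cands[:, None, :] -
                               np.array(chosen)[None, :, :], axis=2).min(axis=1)
        else:
            d = np.ones(len(cands))
        score = w_quality * quality + w_diversity * d / (d.max() + 1e-12)
        i = int(np.argmax(score))
        chosen.append(cands[i])
        quality[i] = -np.inf                             # never re-pick
    return np.array(chosen)

# toy run: 5 batches of 4 parallel trials on the 3-D sphere function
f = lambda x: np.sum(x**2, axis=-1)
X = rng.uniform(-5, 5, size=(4, 3)); y = f(X)
for _ in range(5):
    batch = down_select(propose_candidates(X, y, (-5, 5)), X, y, batch_size=4)
    X, y = np.vstack([X, batch]), np.concatenate([y, f(batch)])
print("best value after 24 evaluations:", y.min())
```

The paper's key point is that the down-selection weights need not be hand-set as here; they are learned offline and conditioned on the progress of the optimization.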
Related papers
- Batch Bayesian Optimization via Expected Subspace Improvement [0.0]
We propose a simple and efficient approach to extend Bayesian optimization to batch evaluation.
Our proposed approach can achieve near-linear speedup when compared with the sequential Bayesian optimization algorithm.
arXiv Detail & Related papers (2024-11-25T09:14:09Z)
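The expected subspace improvement criterion itself is not spelled out in this summary; as a stand-in, the sketch below builds a batch with the classic Kriging-believer heuristic (sequentially hallucinating the GP mean as an observation), which illustrates how a batch can be chosen without waiting for true evaluations. Kernel and hyperparameters are assumed.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def gp_posterior(X, y, Xq, ls=0.5, var=1.0, noise=1e-6):
    """Posterior mean/std of a zero-mean RBF-kernel GP."""
    def k(A, B):
        d2 = np.sum((A[:, None] - B[None, :])**2, axis=-1)
        return var * np.exp(-0.5 * d2 / ls**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xq, X)
    mu = Ks @ np.linalg.solve(K, y)
    sd = np.sqrt(np.clip(var - np.diag(Ks @ np.linalg.solve(K, Ks.T)),
                         1e-12, None))
    return mu, sd

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd                     # improvement over incumbent
    return sd * (z * norm.cdf(z) + norm.pdf(z))

def kriging_believer_batch(X, y, grid, batch_size=4):
    """Pick a batch sequentially: after each pick, pretend the GP mean
    is the observed value ('believer'), so later picks spread out."""
    Xf, yf, batch = X.copy(), y.copy(), []
    for _ in range(batch_size):
        mu, sd = gp_posterior(Xf, yf, grid)
        i = int(np.argmax(expected_improvement(mu, sd, yf.min())))
        batch.append(grid[i])
        Xf = np.vstack([Xf, grid[i:i + 1]]); yf = np.append(yf, mu[i])
    return np.array(batch)

# toy 1-D minimization of f(x) = sin(3x) + 0.5x
f = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0]
grid = np.linspace(-2, 2, 200)[:, None]
X = rng.uniform(-2, 2, (3, 1)); y = f(X)
print("batch to evaluate in parallel:", kriging_believer_batch(X, y, grid).ravel())
```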
- Multi-fidelity Constrained Optimization for Stochastic Black Box Simulators [1.6385815610837167]
We introduce Scout-Nd (Stochastic Constrained Optimization for N dimensions), an algorithm for constrained optimization over stochastic black-box simulators.
Scout-Nd efficiently estimates the gradient, reduces the noise of the gradient estimator, and applies multi-fidelity schemes to further reduce computational effort.
We validate our approach on standard benchmarks, demonstrating its effectiveness in optimizing parameters and its better performance compared to existing methods.
arXiv Detail & Related papers (2023-11-25T23:36:38Z)
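Scout-Nd's estimator is not given here; the sketch below shows one standard way to realize "estimate the gradient and reduce the noise of the estimator" for a stochastic simulator: a score-function (REINFORCE-style) gradient with a mean baseline. The multi-fidelity component is omitted, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_simulator(x):
    """Stand-in for a stochastic black-box simulator: sphere + noise."""
    return np.sum(x**2) + 0.1 * rng.standard_normal()

def score_function_grad(theta, n_samples=32, sigma=0.3):
    """Monte Carlo score-function gradient of the smoothed objective
    E_{x ~ N(theta, sigma^2 I)}[f(x)]; subtracting the mean baseline
    keeps the estimator unbiased while reducing its variance."""
    eps = rng.standard_normal((n_samples, theta.size))
    fx = np.array([noisy_simulator(theta + sigma * e) for e in eps])
    baseline = fx.mean()                        # variance reduction
    return ((fx - baseline)[:, None] * eps).mean(axis=0) / sigma

# plain gradient descent on the smoothed objective
theta = rng.uniform(-3, 3, size=5)
for step in range(200):
    theta -= 0.05 * score_function_grad(theta)
print("theta after optimization:", np.round(theta, 2))
```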
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
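A hedged reconstruction, in our notation rather than the paper's, of the decision-theoretic entropy family this summary refers to:

```latex
% Decision-theoretic (Bayes-risk) entropy of a belief p over theta,
% for an action set A and a loss function ell (notation assumed here):
H_{\ell,A}(p) = \inf_{a \in A} \mathbb{E}_{\theta \sim p}\left[\ell(\theta, a)\right]

% Shannon entropy is the special case A = \{\text{densities } q\} with
% log loss \ell(\theta, q) = -\log q(\theta). An acquisition function
% follows as the expected entropy reduction from querying x:
\alpha(x) = H_{\ell,A}\bigl(p(\theta \mid \mathcal{D})\bigr)
          - \mathbb{E}_{y \sim p(y \mid x, \mathcal{D})}
            \Bigl[ H_{\ell,A}\bigl(p(\theta \mid \mathcal{D} \cup \{(x, y)\})\bigr) \Bigr]
```

Different choices of the loss ell then yield the flexible family of acquisition functions the summary mentions.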
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adapt Monte Carlo tree search to this space, equipping it with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
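A toy rendering of the search-over-expression-trees idea: the paper's Monte Carlo tree search over a super-tree is replaced here by plain random sampling, keeping rejection of equivalent forms via a numeric fingerprint. The grammar, probes, and scoring task are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# tiny grammar for an update direction d(g, m): g = gradient, m = momentum
LEAVES = ["g", "m"]
UNARY = {"neg": lambda a: -a, "sign": np.sign}
BINARY = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
PROBES = [(0.7, -1.3), (-2.1, 0.4), (1.5, 1.5)]     # fixed probe inputs

def sample_expr(depth=2):
    """Sample a random expression tree (a plain-sampling stand-in for
    the paper's Monte Carlo tree search over its super-tree)."""
    if depth == 0 or rng.random() < 0.3:
        return LEAVES[rng.integers(len(LEAVES))]
    if rng.random() < 0.5:
        return (list(UNARY)[rng.integers(len(UNARY))], sample_expr(depth - 1))
    return (list(BINARY)[rng.integers(len(BINARY))],
            sample_expr(depth - 1), sample_expr(depth - 1))

def evaluate(expr, g, m):
    if expr == "g": return g
    if expr == "m": return m
    if expr[0] in UNARY:
        return UNARY[expr[0]](evaluate(expr[1], g, m))
    return BINARY[expr[0]](evaluate(expr[1], g, m), evaluate(expr[2], g, m))

def fingerprint(expr):
    """Equivalent-form detection: trees agreeing on all probe inputs
    are treated as the same optimizer and rejected as duplicates."""
    return tuple(round(float(evaluate(expr, g, m)), 6) for g, m in PROBES)

def score(expr):
    """Final loss of the candidate update rule on a toy 1-D quadratic."""
    x, m = 5.0, 0.0
    for _ in range(50):
        g = 2.0 * x                                  # gradient of x^2
        m = 0.9 * m + g
        x = float(np.clip(x - 0.1 * evaluate(expr, g, m), -1e6, 1e6))
    return x * x

seen, best = set(), (float("inf"), None)
for _ in range(2000):                                # rejection sampling loop
    e = sample_expr()
    fp = fingerprint(e)
    if fp in seen:
        continue
    seen.add(fp)
    best = min(best, (score(e), e), key=lambda t: t[0])
print("best update rule:", best[1], "  final loss:", best[0])
```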
- A unified surrogate-based scheme for black-box and preference-based optimization [2.561649173827544]
We show that black-box and preference-based optimization problems are closely related and can be solved using the same family of approaches.
We propose the generalized Metric Response Surface (gMRS) algorithm, an optimization scheme that is a generalization of the popular MSRS framework.
arXiv Detail & Related papers (2022-02-03T08:47:54Z)
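The gMRS generalization is not detailed in this summary, so the sketch below shows the base MSRS idea it extends: score random candidates by a weighted combination of normalized surrogate value and distance to evaluated points, cycling the weight between exploitation and exploration. The RBF surrogate and weight schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_rbf(X, y, eps=1.0):
    """Fit a Gaussian-RBF interpolant s(x) ~ f(x) (the response surface)."""
    Phi = np.exp(-eps * np.sum((X[:, None] - X[None, :])**2, axis=-1))
    w = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)
    return lambda Z: np.exp(-eps * np.sum((Z[:, None] - X[None, :])**2,
                                          axis=-1)) @ w

def msrs_select(surrogate, X, cands, weight):
    """Metric response surface selection: pick the candidate minimizing
    weight * (scaled surrogate value) + (1 - weight) * (scaled distance
    merit), where 0 is best on both scales."""
    s = surrogate(cands)
    Vs = (s - s.min()) / (np.ptp(s) + 1e-12)          # 0 = best predicted
    d = np.linalg.norm(cands[:, None] - X[None, :], axis=-1).min(axis=1)
    Vd = (d.max() - d) / (np.ptp(d) + 1e-12)          # 0 = farthest from data
    return cands[np.argmin(weight * Vs + (1 - weight) * Vd)]

# toy loop on a 2-D test function; the weight cycles so the search
# alternates between exploration (low weight) and exploitation (high)
f = lambda x: np.sum((x - 1)**4 - 16 * (x - 1)**2 + 5 * (x - 1), axis=-1)
X = rng.uniform(-4, 4, (5, 2)); y = f(X)
for i in range(20):
    cands = rng.uniform(-4, 4, (200, 2))
    xn = msrs_select(fit_rbf(X, y), X, cands, weight=[0.3, 0.5, 0.8, 0.95][i % 4])
    X, y = np.vstack([X, xn]), np.append(y, f(xn[None])[0])
print("best value found:", y.min())
```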
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Divide and Learn: A Divide and Conquer Approach for Predict+Optimize [50.03608569227359]
The predict+optimize problem combines machine learning of problem coefficients with an optimization problem that uses the predicted coefficients.
We show how to directly express the loss of the optimization problem in terms of the predicted coefficients as a piecewise linear function.
We propose a novel divide-and-conquer algorithm to tackle optimization problems without this restriction and predict their coefficients using the optimization loss.
arXiv Detail & Related papers (2020-12-04T00:26:56Z)
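A tiny hypothetical instance showing why the optimization loss is piecewise linear in a predicted coefficient: as one coefficient sweeps, the induced decision changes only at finitely many breakpoints, so the predicted optimal value is a max of lines and the true regret is piecewise constant.

```python
import numpy as np

# toy predict+optimize instance: pick one of four feasible solutions
# x in S to maximize vhat . x, where vhat is the model's prediction
S = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])
v_true = np.array([3.0, 1.0, 2.0])                 # true coefficients
best_true = (S @ v_true).max()

def regret(theta, vhat_rest=(1.5, 2.5)):
    """True regret of the decision induced by predicting theta for
    coefficient 0 (the other two predictions held fixed)."""
    vhat = np.array([theta, *vhat_rest])
    x = S[np.argmax(S @ vhat)]                     # decision from predictions
    return best_true - x @ v_true

# sweeping theta, the decision flips only at a breakpoint (here at
# theta = 1.5), so the regret is a step function of the prediction
for theta in np.linspace(-2.0, 6.0, 9):
    print(f"theta = {theta:5.1f}   regret = {regret(theta):.1f}")
```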
- Upper Trust Bound Feasibility Criterion for Mixed Constrained Bayesian Optimization with Application to Aircraft Design [41.74498230885008]
We adapt the so-called super efficient global optimization algorithm to solve mixed constrained problems more accurately.
We show the good potential of the approach on a set of numerical experiments.
arXiv Detail & Related papers (2020-05-11T12:59:09Z)
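The exact criterion is not given in this summary; one natural reading of an upper-trust-bound feasibility test for a constraint g(x) <= 0, sketched below, declares a point feasible only when the GP upper confidence bound on g is non-positive. The kernel, kappa, and toy constraint are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def gp_posterior(X, y, Xq, ls=0.4, var=1.0, noise=1e-6):
    """Posterior mean/std of a zero-mean RBF-kernel GP on the constraint."""
    k = lambda A, B: var * np.exp(-0.5 * np.sum((A[:, None] - B[None, :])**2,
                                                axis=-1) / ls**2)
    Kinv = np.linalg.inv(k(X, X) + noise * np.eye(len(X)))
    Ks = k(Xq, X)
    mu = Ks @ Kinv @ y
    sd = np.sqrt(np.clip(var - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks),
                         1e-12, None))
    return mu, sd

def utb_feasible(mu, sd, kappa=2.0):
    """Upper-trust-bound feasibility for a constraint g(x) <= 0: declare
    a point feasible only if even the upper bound mu + kappa*sd is
    non-positive (a conservative classification)."""
    return mu + kappa * sd <= 0.0

# toy constraint g(x) = x^2 - 1 <= 0, whose feasible set is [-1, 1]
g = lambda x: x[:, 0]**2 - 1.0
X = rng.uniform(-2, 2, (8, 1)); y = g(X)
grid = np.linspace(-2, 2, 9)[:, None]
mu, sd = gp_posterior(X, y, grid)
for x, m, s, ok in zip(grid.ravel(), mu, sd, utb_feasible(mu, sd)):
    print(f"x = {x:5.2f}   upper bound = {m + 2.0 * s:6.2f}   feasible = {bool(ok)}")
```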
- Parallel Predictive Entropy Search for Multi-objective Bayesian Optimization with Constraints [0.0]
Real-world problems often involve the optimization of several objectives under multiple constraints.
This article introduces PPESMOC, an information-based batch method for the simultaneous optimization of black-box functions.
Iteratively, PPESMOC selects a batch of input locations at which to evaluate the black-boxes.
arXiv Detail & Related papers (2020-04-01T17:37:58Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces, the algorithm passes through several low-function-value regions before reaching the optimum.
One approach to shorten this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
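One simple way to realize the warping idea, sketched below under the assumption of a Gaussian prior on a unit interval: reparameterize the search through the prior's quantile function, so uniform sampling in the unwarped coordinate concentrates evaluations where the prior places the optimum. This is a generic inverse-CDF warp, not necessarily the paper's exact construction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

# expert prior over the optimum's location: N(0.7, 0.1^2) on [0, 1]
prior = norm(loc=0.7, scale=0.1)

def warp(z):
    """Map unwarped coordinates z in (0, 1) through the prior's quantile
    function: the space expands where the prior puts mass on the optimum
    and shrinks elsewhere."""
    return np.clip(prior.ppf(z), 0.0, 1.0)

# any optimizer sampling z uniformly now searches the warped space;
# plain random search on a toy objective stands in for BO here
f = lambda x: (x - 0.65)**2
z = rng.uniform(0.01, 0.99, size=50)      # stay off the ppf's infinite tails
x = warp(z)
print("best x found:", x[np.argmin(f(x))])
```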
- Time-varying Gaussian Process Bandit Optimization with Non-constant Evaluation Time [93.6788993843846]
We propose a novel time-varying Bayesian optimization algorithm that can effectively handle the non-constant evaluation time.
Our bound elucidates that the pattern of the evaluation-time sequence can greatly affect the difficulty of the problem.
arXiv Detail & Related papers (2020-03-10T13:28:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.