Voronoi Candidates for Bayesian Optimization
- URL: http://arxiv.org/abs/2402.04922v1
- Date: Wed, 7 Feb 2024 14:47:13 GMT
- Title: Voronoi Candidates for Bayesian Optimization
- Authors: Nathan Wycoff, John W. Smith, Annie S. Booth, Robert B. Gramacy
- Abstract summary: Many practical BO methods, particularly in high dimension, eschew a formal, continuous optimization of the acquisition function.
We propose to use candidates which lie on the boundary of the Voronoi tessellation of the current design points, so they are equidistant to two or more of them.
We discuss strategies for efficient implementation by directly sampling the Voronoi boundary without explicitly generating the tessellation.
- Score: 2.7309692684728617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) offers an elegant approach for efficiently
optimizing black-box functions. However, acquisition criteria demand their own
challenging inner-optimization, which can induce significant overhead. Many
practical BO methods, particularly in high dimension, eschew a formal,
continuous optimization of the acquisition function and instead search
discretely over a finite set of space-filling candidates. Here, we propose to
use candidates which lie on the boundary of the Voronoi tessellation of the
current design points, so they are equidistant to two or more of them. We
discuss strategies for efficient implementation by directly sampling the
Voronoi boundary without explicitly generating the tessellation, thus
accommodating large designs in high dimension. On a battery of test problems
optimized via Gaussian processes with expected improvement, our proposed
approach significantly improves the execution time of a multi-start continuous
search without a loss in accuracy.
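The abstract's key computational idea is to replace the continuous inner optimization of the acquisition with a discrete search over candidates lying on the Voronoi boundary of the design, sampled without constructing the tessellation. Below is a minimal, hypothetical Python sketch of one such sampler: it walks from a random design point along a random direction and bisects to the distance at which the nearest design point changes, yielding a point approximately equidistant to two or more designs. The function names, the unit-cube domain, and the pairing with closed-form expected improvement are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import norm


def voronoi_boundary_candidates(X, n_cand, n_bisect=30, seed=None):
    """Sample points approximately on the Voronoi boundary of the design X
    (shape n x d) without building the tessellation: from a random design
    point, walk along a random direction and bisect to the distance at
    which the nearest design point changes."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    tree = cKDTree(X)
    cands = np.empty((n_cand, d))
    for k in range(n_cand):
        i = rng.integers(n)                      # anchor design point
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                   # random unit direction
        lo, hi = 0.0, 1.0
        # grow the step until X[i] is no longer the nearest design point
        while tree.query(X[i] + hi * u)[1] == i and hi < 1e3:
            lo, hi = hi, 2.0 * hi
        # bisect between "nearest is X[i]" and "nearest is another point"
        for _ in range(n_bisect):
            mid = 0.5 * (lo + hi)
            if tree.query(X[i] + mid * u)[1] == i:
                lo = mid
            else:
                hi = mid
        # midpoint is (numerically) equidistant to two or more designs;
        # clipping to a unit-cube domain is an assumption for simplicity
        cands[k] = np.clip(X[i] + 0.5 * (lo + hi) * u, 0.0, 1.0)
    return cands


def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization under a Gaussian posterior (mu, sigma)."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

In a BO loop one would fit a Gaussian process to the evaluated designs, score these candidates with expected improvement, and evaluate the objective at the argmax, so the multi-start continuous inner search is replaced by a cheap discrete one.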
Related papers
- An Adaptive Dimension Reduction Estimation Method for High-dimensional Bayesian Optimization [6.79843988450982]
We propose a two-step optimization framework to extend BO to high-dimensional settings.
Our algorithm offers the flexibility to operate these steps either concurrently or in sequence.
Numerical experiments validate the efficacy of our method in challenging scenarios.
arXiv Detail & Related papers (2024-03-08T16:21:08Z)
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of classical GP-UCB, while the additional exploration step facilitates their convergence.
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares [68.8204255655161]
We consider potentially non-convex optimization problems.
In this paper, we propose an algorithm that achieves close-to-optimal a priori computational guarantees.
arXiv Detail & Related papers (2022-04-11T09:37:04Z)
- High dimensional Bayesian Optimization Algorithm for Complex System in Time Series [1.9371782627708491]
This paper presents a novel high dimensional Bayesian optimization algorithm.
Based on the time-dependent or dimension-dependent characteristics of the model, the proposed algorithm can reduce the dimension evenly.
To increase the final accuracy of the optimal solution, the proposed algorithm adds a local search based on a series of Adam-based steps at the final stage.
arXiv Detail & Related papers (2021-08-04T21:21:17Z)
- BOSS: Bayesian Optimization over String Spaces [15.630421177117634]
This article develops a Bayesian optimization (BO) method which acts directly over raw strings.
It proposes the first uses of string kernels and genetic algorithms within BO loops.
arXiv Detail & Related papers (2020-10-02T13:18:27Z)
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z)
- Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees [28.46586066038317]
We provide the first efficient implementation of general multi-step lookahead Bayesian optimization, which is formulated as a sequence of nested optimization problems.
Instead of solving these problems in a nested way, we equivalently optimize all decision variables in the full tree jointly.
We demonstrate that multi-step expected improvement is tractable and exhibits performance superior to existing methods on a wide range of benchmarks.
arXiv Detail & Related papers (2020-06-29T02:17:18Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around low-probability regions (see the sketch after this list).
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
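The last entry above describes warping the search space with a prior over the optimum's location. Below is a minimal, hypothetical sketch of one such warp: uniform space-filling candidates are pushed through per-dimension truncated-Gaussian quantile functions so that they concentrate where the prior places mass. The prior family, the parameter values, and the function names are illustrative assumptions, not that paper's actual transformation.

```python
import numpy as np
from scipy.stats import truncnorm


def warp_candidates(U, prior_mean, prior_sd, lower, upper):
    """Map uniform candidates U (in [0, 1]^d) through per-dimension
    truncated-Gaussian quantile functions, concentrating candidates
    around the prior's high-probability region for the optimum."""
    a = (lower - prior_mean) / prior_sd   # standardized lower bounds
    b = (upper - prior_mean) / prior_sd   # standardized upper bounds
    return truncnorm.ppf(U, a, b, loc=prior_mean, scale=prior_sd)


# Hypothetical example: 2-d search over [0, 1]^2 with an expert guess near (0.2, 0.7).
rng = np.random.default_rng(0)
U = rng.uniform(size=(100, 2))            # uniform space-filling candidates
X_warped = warp_candidates(
    U,
    prior_mean=np.array([0.2, 0.7]),      # assumed expert prior mean
    prior_sd=np.array([0.15, 0.15]),      # assumed prior spread
    lower=np.zeros(2),
    upper=np.ones(2),
)
```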