Batch Bayesian optimisation via density-ratio estimation with guarantees
- URL: http://arxiv.org/abs/2209.10715v1
- Date: Thu, 22 Sep 2022 00:42:18 GMT
- Title: Batch Bayesian optimisation via density-ratio estimation with guarantees
- Authors: Rafael Oliveira, Louis Tiao and Fabio Ramos
- Abstract summary: We present a theoretical analysis of BORE's regret and an extension of the algorithm with improved uncertainty estimates.
We also show that BORE can be naturally extended to a batch optimisation setting by recasting the problem as approximate Bayesian inference.
- Score: 26.052368583196426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimisation (BO) algorithms have shown remarkable success in
applications involving expensive black-box functions. Traditionally, BO has been
framed as a sequential decision-making process which estimates the utility of
query points via an acquisition function and a prior over functions, such as a
Gaussian process. Recently, however, a reformulation of BO via density-ratio
estimation (BORE) allowed reinterpreting the acquisition function as a
probabilistic binary classifier, removing the need for an explicit prior over
functions and increasing scalability. In this paper, we present a theoretical
analysis of BORE's regret and an extension of the algorithm with improved
uncertainty estimates. We also show that BORE can be naturally extended to a
batch optimisation setting by recasting the problem as approximate Bayesian
inference. The resulting algorithm comes equipped with theoretical performance
guarantees and is assessed against other batch BO baselines in a series of
experiments.
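As a concrete illustration of the classifier-based view described above, here is a minimal sketch of a single BORE iteration. The MLP classifier and the helper name `bore_step` are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def bore_step(X, y, candidates, gamma=0.25):
    """One BORE iteration: fit a classifier, return the next query point."""
    tau = np.quantile(y, gamma)            # threshold at the gamma-quantile of observed values
    z = (y <= tau).astype(int)             # z = 1 labels the "good" (low-objective) points
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000)
    clf.fit(X, z)                          # class-probability estimator pi(x) ~ P(z = 1 | x)
    pi = clf.predict_proba(candidates)[:, 1]
    return candidates[np.argmax(pi)]       # maximising pi(x) is equivalent to maximising EI
```

Since the classifier output itself plays the role of the acquisition function, any sufficiently flexible probabilistic classifier can stand in for the MLP above.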
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model built on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower costs and better robustness to noise, as verified by extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Stochastic Bayesian Optimization with Unknown Continuous Context Distribution via Kernel Density Estimation [28.413085548038932]
We propose two algorithms that employ kernel density estimation to learn the probability density function (PDF) of the continuous context variable online.
Theoretical results demonstrate that both algorithms have sub-linear Bayesian cumulative regret on the expectation objective.
arXiv Detail & Related papers (2023-12-16T11:32:28Z)
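A minimal sketch of the kernel-density idea above, assuming a scalar context and a fixed Gaussian bandwidth; the class and method names are illustrative, not the authors' code.

```python
import numpy as np

class OnlineKDE:
    """Gaussian kernel density estimate, updated as context samples arrive."""

    def __init__(self, bandwidth=0.1):
        self.h = bandwidth
        self.samples = []

    def update(self, c):
        # Record a newly observed context value.
        self.samples.append(float(c))

    def pdf(self, c):
        # Average of Gaussian kernels centred at the stored contexts.
        s = np.asarray(self.samples)
        k = np.exp(-0.5 * ((c - s) / self.h) ** 2) / (self.h * np.sqrt(2.0 * np.pi))
        return float(np.mean(k))
```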
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
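One generic way to realise a differentiable calibration term of this kind is to penalise the gap between soft empirical coverage and the nominal credibility level; the sketch below (PyTorch, with a sigmoid as a soft indicator) is an assumed illustration, not the paper's specific relaxation.

```python
import torch

def soft_coverage_penalty(ranks, levels, temperature=0.05):
    """Penalise mismatch between empirical coverage and nominal levels.

    ranks: tensor in [0, 1], e.g. posterior CDF values of the true parameters.
    levels: iterable of nominal credibility levels, e.g. (0.5, 0.8, 0.9, 0.95).
    """
    penalty = torch.zeros(())
    for alpha in levels:
        # Soft, differentiable version of the indicator P(rank <= alpha).
        soft_cov = torch.sigmoid((alpha - ranks) / temperature).mean()
        penalty = penalty + (soft_cov - alpha) ** 2
    return penalty
```

The penalty would then be added to the usual training objective, e.g. loss = nll + lam * soft_coverage_penalty(ranks, levels).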
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
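In its standard decision-theoretic (Bayes-risk) form, such a generalized entropy of a belief p is the smallest expected loss achievable by an action a from an action set A; the notation below is assumed rather than copied from the paper.

```latex
% Decision-theoretic entropy of a belief p (Bayes-risk form; notation assumed):
H_{\ell,\mathcal{A}}[p] \;=\; \inf_{a \in \mathcal{A}} \; \mathbb{E}_{\theta \sim p}\big[\ell(\theta, a)\big]
```

Shannon entropy is recovered when A is the set of probability distributions and ell is the log loss.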
- Batch Bayesian Optimization via Particle Gradient Flows [0.5735035463793008]
We show how to find global optima of objective functions which are only available as a black-box or are expensive to evaluate.
We construct a new acquisition function based on multipoint expected improvement which is convex over the space of probability measures.
arXiv Detail & Related papers (2022-09-10T18:10:15Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Trusted-Maximizers Entropy Search for Efficient Bayesian Optimization [39.824086260578646]
This paper presents a novel trusted-maximizers entropy search (TES) acquisition function.
It measures how much an input query contributes to the information gain over a finite set of trusted maximizers.
arXiv Detail & Related papers (2021-07-30T07:25:07Z)
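In the usual entropy-search form, such an acquisition can be sketched as the information gained about the maximiser x* from observing the outcome y_x at a query x, with x* restricted to a finite trusted set; the notation here is assumed, not an excerpt from the paper.

```latex
% Entropy-search-style acquisition over a finite set of trusted maximisers
% (assumed notation): expected reduction in entropy of the maximiser belief.
\alpha_{\mathrm{TES}}(x)
  \;=\; H\big[p(x^{\star} \mid \mathcal{D})\big]
  \;-\; \mathbb{E}_{y_x}\Big[H\big[p(x^{\star} \mid \mathcal{D} \cup \{(x, y_x)\})\big]\Big]
```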
- BORE: Bayesian Optimization by Density-Ratio Estimation [34.22533785573784]
We cast the expected improvement (EI) function as a binary classification problem, building on the link between class-probability estimation and density-ratio estimation.
This reformulation provides numerous advantages, not least in terms of versatility and scalability.
arXiv Detail & Related papers (2021-02-17T20:04:11Z)
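The link exploited here can be written compactly. With gamma the chosen quantile level, tau the matching threshold on observed values, and l(x), g(x) the densities of inputs whose objective values fall below and above tau, EI is proportional to the gamma-relative density ratio, which in turn is proportional to the class posterior pi(x) of a binary classifier (symbols follow common usage rather than the paper's exact notation):

```latex
% EI as a relative density ratio, and the ratio as a class posterior:
\alpha_{\mathrm{EI}}(x; \tau) \;\propto\;
r_{\gamma}(x) \;=\; \frac{\ell(x)}{\gamma\,\ell(x) + (1-\gamma)\,g(x)}
\;=\; \gamma^{-1}\,\pi(x),
\qquad \pi(x) = p(z = 1 \mid x)
```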
- Sub-linear Regret Bounds for Bayesian Optimisation in Unknown Search Spaces [63.22864716473051]
We propose a novel BO algorithm which expands (and shifts) the search space over iterations.
We show theoretically that for both our algorithms, the cumulative regret grows at sub-linear rates.
arXiv Detail & Related papers (2020-09-05T14:24:40Z)
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret.
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
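A minimal sketch of the randomised trade-off idea above; the Gamma sampling distribution and the function name are illustrative assumptions, since the paper's regret guarantee depends on specific conditions on that distribution.

```python
import numpy as np

def randomised_ucb(mu, sigma, rng):
    """GP-UCB score with the trade-off parameter sampled, not scheduled."""
    beta = rng.gamma(shape=2.0, scale=1.0)  # random exploration-exploitation weight
    return mu + np.sqrt(beta) * sigma       # classic UCB form; mu, sigma from the GP posterior

# Usage: mu and sigma are the GP posterior mean and standard deviation at the
# candidate points; evaluate the candidate with the highest score.
rng = np.random.default_rng(0)
```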
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.