$\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian
Optimization
- URL: http://arxiv.org/abs/2204.11051v1
- Date: Sat, 23 Apr 2022 11:07:13 GMT
- Title: $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian
Optimization
- Authors: Carl Hvarfner, Danny Stoll, Artur Souza, Marius Lindauer, Frank
Hutter, Luigi Nardi
- Abstract summary: We propose $\pi$BO, an acquisition function generalization which incorporates prior beliefs about the location of the optimum.
In contrast to previous approaches, $\pi$BO is conceptually simple and can easily be integrated with existing libraries and many acquisition functions.
We also demonstrate that $\pi$BO improves on the state-of-the-art performance for a popular deep learning task, with a 12.5$\times$ time-to-accuracy speedup over prominent BO approaches.
- Score: 40.30019289383378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) has become an established framework and popular
tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms.
While known for its sample-efficiency, vanilla BO cannot utilize readily
available prior beliefs the practitioner has on the potential location of the
optimum. Thus, BO disregards a valuable source of information, reducing its
appeal to ML practitioners. To address this issue, we propose $\pi$BO, an
acquisition function generalization which incorporates prior beliefs about the
location of the optimum in the form of a probability distribution, provided by
the user. In contrast to previous approaches, $\pi$BO is conceptually simple
and can easily be integrated with existing libraries and many acquisition
functions. We provide regret bounds when $\pi$BO is applied to the common
Expected Improvement acquisition function and prove convergence at regular
rates independently of the prior. Further, our experiments show that $\pi$BO
outperforms competing approaches across a wide suite of benchmarks and prior
characteristics. We also demonstrate that $\pi$BO improves on the
state-of-the-art performance for a popular deep learning task, with a 12.5
$\times$ time-to-accuracy speedup over prominent BO approaches.
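
To make the idea above concrete, here is a minimal sketch of a prior-weighted acquisition function: Expected Improvement multiplied by the user's belief density raised to a power that decays with the iteration count, so the prior's influence fades as observations accumulate. The $\pi(x)^{\beta/n}$ decay schedule follows the paper's description; the Gaussian prior, the choice of $\beta$, and the stand-in GP posterior values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_y):
    """Standard EI for minimization, given a GP posterior mean/std at x."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def pi_bo_acquisition(x, mu, sigma, best_y, prior_pdf, n, beta=10.0):
    """piBO-style acquisition: EI weighted by the user prior, pi(x)^(beta/n).

    The exponent beta/n shrinks with the iteration count n, so the weighted
    score converges toward plain EI; beta controls how long the user belief
    dominates (the value here is illustrative).
    """
    return expected_improvement(mu, sigma, best_y) * prior_pdf(x) ** (beta / n)

# Illustrative usage: a Gaussian belief that the optimum lies near x = 0.3.
prior_pdf = lambda x: norm.pdf(x, loc=0.3, scale=0.1)
mu, sigma = 0.5, 0.2  # stand-ins for a GP posterior at the candidate x
score = pi_bo_acquisition(0.25, mu, sigma, best_y=0.4,
                          prior_pdf=prior_pdf, n=5)
```

Because the exponent decays to zero, the weighted acquisition approaches unmodified EI, which is consistent with the prior-independent convergence rates claimed in the abstract.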
Related papers
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, that describes the trade-off between the cost and performance of BO.
We validate our algorithm on various LC datasets and find that it outperforms all the previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower costs and better robustness to noise, as verified by extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- A General Framework for User-Guided Bayesian Optimization [51.96352579696041]
We propose ColaBO, the first Bayesian-principled framework for prior beliefs beyond the typical kernel structure.
We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
arXiv Detail & Related papers (2023-11-24T18:27:26Z)
- qEUBO: A Decision-Theoretic Acquisition Function for Preferential Bayesian Optimization [17.300690315775572]
We introduce the expected utility of the best option (qEUBO) as a novel acquisition function for PBO.
We show that qEUBO is one-step Bayes optimal and thus equivalent to the popular knowledge gradient acquisition function.
We demonstrate that qEUBO outperforms the state-of-the-art acquisition functions for PBO across many settings.
arXiv Detail & Related papers (2023-03-28T06:02:56Z)
- Bayesian Optimization for Function Compositions with Applications to Dynamic Pricing [0.0]
We propose a practical BO method for function compositions where the form of the composition is known but the constituent functions are expensive to evaluate.
We demonstrate a novel application to dynamic pricing in revenue management when the underlying demand function is expensive to evaluate.
arXiv Detail & Related papers (2023-03-21T15:45:06Z)
- Multi-Fidelity Bayesian Optimization with Unreliable Information Sources [12.509709549771385]
We propose rMFBO (robust MFBO) to make GP-based MFBO schemes robust to the addition of unreliable information sources.
We demonstrate the effectiveness of the proposed methodology on a number of numerical benchmarks.
We expect rMFBO to be particularly useful to reliably include human experts with varying knowledge within BO processes.
arXiv Detail & Related papers (2022-10-25T11:47:33Z)
- A General Recipe for Likelihood-free Bayesian Optimization [115.82591413062546]
We propose likelihood-free BO (LFBO) to extend BO to a broader class of models and utilities.
LFBO directly models the acquisition function without having to separately perform inference with a probabilistic surrogate model.
We show that computing the acquisition function in LFBO can be reduced to optimizing a weighted classification problem (see the sketch after this list).
arXiv Detail & Related papers (2022-06-27T03:55:27Z)
- Sparse Bayesian Optimization [16.867375370457438]
We present several regularization-based approaches that allow us to discover sparse and more interpretable configurations.
We propose a novel differentiable relaxation based on homotopy continuation that makes it possible to target sparsity.
We show that we are able to efficiently optimize for sparsity.
arXiv Detail & Related papers (2022-03-03T18:25:33Z)
- Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z)
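
The sketch referenced in the LFBO entry above: under the weighted-classification reduction, candidates are scored by a classifier trained to separate observations that improve on a threshold from those that do not, with each improving example weighted by its amount of improvement, so the classifier's output plays the role of an EI-like acquisition. The threshold quantile, the logistic-regression model, and the toy data are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lfbo_style_classifier(X, y, quantile=0.33):
    """Weighted-classification stand-in for an EI-like acquisition (minimization).

    Observations below the threshold tau form the 'improvement' class; each is
    weighted by its improvement (tau - y), while non-improving points get unit
    weight. The quantile and model choice are illustrative.
    """
    tau = np.quantile(y, quantile)
    labels = (y < tau).astype(int)
    weights = np.where(labels == 1, tau - y, 1.0)
    clf = LogisticRegression()
    clf.fit(X, labels, sample_weight=weights)
    return clf

# Illustrative usage on toy data: rank candidates by predicted probability
# of improvement; higher scores suggest more promising points to evaluate.
rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 2))
y = ((X - 0.5) ** 2).sum(axis=1)  # toy objective to minimize
clf = lfbo_style_classifier(X, y)
candidates = rng.uniform(size=(10, 2))
scores = clf.predict_proba(candidates)[:, 1]
```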
This list is automatically generated from the titles and abstracts of the papers on this site.