Accounting for Gaussian Process Imprecision in Bayesian Optimization
- URL: http://arxiv.org/abs/2111.08299v1
- Date: Tue, 16 Nov 2021 08:45:39 GMT
- Title: Accounting for Gaussian Process Imprecision in Bayesian Optimization
- Authors: Julian Rodemann, Thomas Augustin
- Abstract summary: We study the effect of the Gaussian processes' prior specifications on classical BO's convergence.
We introduce PROBO as a generalization of BO that aims at rendering the method more robust towards prior mean parameter misspecification.
We test our approach against classical BO on a real-world problem from material science and observe PROBO to converge faster.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bayesian optimization (BO) with Gaussian processes (GP) as surrogate models
is widely used to optimize analytically unknown and expensive-to-evaluate
functions. In this paper, we propose Prior-mean-RObust Bayesian Optimization
(PROBO) that outperforms classical BO on specific problems. First, we study the
effect of the Gaussian processes' prior specifications on classical BO's
convergence. We find the prior's mean parameters to have the highest influence
on convergence among all prior components. In response to this result, we
introduce PROBO as a generalization of BO that aims at rendering the method
more robust towards prior mean parameter misspecification. This is achieved by
explicitly accounting for GP imprecision via a prior near-ignorance model. At
the heart of this is a novel acquisition function, the generalized lower
confidence bound (GLCB). We test our approach against classical BO on a
real-world problem from material science and observe PROBO to converge faster.
Further experiments on multimodal and wiggly target functions confirm the
superiority of our method.
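For intuition, here is a minimal, hedged sketch of what a GLCB-style acquisition could look like. It assumes an RBF-kernel GP with a constant prior mean and represents prior near-ignorance by an interval [m_lo, m_hi] of candidate prior means, using the spread of the resulting posterior means as the imprecision term. The names (rbf, gp_posterior, glcb) and the weights tau and c are illustrative assumptions, not the authors' reference implementation; the exact GLCB definition is given in the paper.

```python
# Hedged sketch of a GLCB-style acquisition (illustrative, not the paper's code).
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    # Squared-exponential kernel between rows of A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xs, prior_mean, noise=1e-6, ls=1.0, var=1.0):
    # GP regression posterior at candidates Xs, with a *constant* prior mean.
    K = rbf(X, X, ls, var) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ls, var)
    Kss = rbf(Xs, Xs, ls, var)
    alpha = np.linalg.solve(K, y - prior_mean)
    mu = prior_mean + Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mu, sd

def glcb(X, y, Xs, m_lo, m_hi, tau=1.0, c=1.0):
    # GLCB-style score for *minimisation*: smaller values are queried first.
    mu_lo, sd = gp_posterior(X, y, Xs, m_lo)
    mu_hi, _ = gp_posterior(X, y, Xs, m_hi)
    mu = 0.5 * (mu_lo + mu_hi)            # reference posterior mean
    imprecision = np.abs(mu_hi - mu_lo)   # spread induced by the prior-mean set
    return mu - tau * sd - c * imprecision

# Illustrative usage: query the minimiser over a candidate grid.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))
y = np.sin(6 * X[:, 0])
Xs = np.linspace(0, 1, 200)[:, None]
next_x = Xs[np.argmin(glcb(X, y, Xs, m_lo=-1.0, m_hi=1.0, tau=1.0, c=0.5))]
```

In a BO loop the next query point would be the minimiser of glcb over the candidate set; in this sketch a larger c puts more weight on regions where the choice of prior mean matters most, i.e. where the surrogate is most sensitive to prior-mean misspecification.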
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Provably Efficient Bayesian Optimization with Unknown Gaussian Process Hyperparameter Estimation [44.53678257757108]
We propose a new BO method that can sub-linearly converge to the objective function's global optimum.
Our method uses a multi-armed bandit technique (EXP3) to add random data points to the BO process.
We demonstrate empirically that our method outperforms existing approaches on various synthetic and real-world problems.
arXiv Detail & Related papers (2023-06-12T03:35:45Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO).
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z) - Pre-training helps Bayesian optimization too [49.28382118032923]
We seek an alternative practice for setting functional priors.
In particular, we consider the scenario where we have data from similar functions that allow us to pre-train a tighter distribution a priori.
Our results show that our method is able to locate good hyperparameters at least 3 times more efficiently than the best competing methods.
arXiv Detail & Related papers (2022-07-07T04:42:54Z) - Surrogate modeling for Bayesian optimization beyond a single Gaussian
process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
Convergence of the proposed EGP-TS (ensemble-GP Thompson sampling) to the global optimum is established through an analysis based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z) - Pre-trained Gaussian Processes for Bayesian Optimization [24.730678780782647]
We propose a new pre-training based BO framework named HyperBO.
We show bounded posterior predictions and near-zero regrets for HyperBO without assuming the "ground truth" GP prior is known.
arXiv Detail & Related papers (2021-09-16T20:46:26Z) - How Bayesian Should Bayesian Optimisation Be? [0.024790788944106048]
We investigate whether a fully-Bayesian treatment of the Gaussian process hyperparameters in BO (FBBO) leads to improved optimisation performance.
We compare FBBO using three approximate inference schemes to the maximum likelihood approach, using the Expected Improvement (EI) and Upper Confidence Bound (UCB) acquisition functions.
We find that FBBO using EI with an ARD kernel leads to the best performance in the noise-free setting, with much less difference between combinations of BO components when the noise is increased.
arXiv Detail & Related papers (2021-05-03T14:28:11Z) - Bayesian Optimization with a Prior for the Optimum [41.41323474440455]
We introduce Bayesian Optimization with a Prior for the Optimum (BOPrO).
BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance.
We show that BOPrO is around 6.67x faster than state-of-the-art methods on a common suite of benchmarks.
arXiv Detail & Related papers (2020-06-25T17:49:24Z) - Randomised Gaussian Process Upper Confidence Bound for Bayesian
Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret. A minimal illustrative sketch of this sampled trade-off appears after the list below.
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.