Diversified Sampling for Batched Bayesian Optimization with
Determinantal Point Processes
- URL: http://arxiv.org/abs/2110.11665v1
- Date: Fri, 22 Oct 2021 08:51:28 GMT
- Title: Diversified Sampling for Batched Bayesian Optimization with
Determinantal Point Processes
- Authors: Elvis Nava, Mojmír Mutný, Andreas Krause
- Abstract summary: We introduce DPP-Batch Bayesian Optimization (DPP-BBO), a universal framework for inducing batch diversity in sampling-based BO.
We illustrate this framework by formulating DPP-Thompson Sampling (DPP-TS) as a variant of the popular Thompson Sampling (TS) algorithm and introducing a Markov Chain Monte Carlo procedure to sample from it.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Bayesian Optimization (BO) we study black-box function optimization with
noisy point evaluations and Bayesian priors. Convergence of BO can be greatly
sped up by batching, where multiple evaluations of the black-box function are
performed in a single round. The main difficulty in this setting is to propose
at the same time diverse and informative batches of evaluation points. In this
work, we introduce DPP-Batch Bayesian Optimization (DPP-BBO), a universal
framework for inducing batch diversity in sampling-based BO by leveraging the
repulsive properties of Determinantal Point Processes (DPP) to naturally
diversify the batch sampling procedure. We illustrate this framework by
formulating DPP-Thompson Sampling (DPP-TS) as a variant of the popular Thompson
Sampling (TS) algorithm and introducing a Markov Chain Monte Carlo procedure to
sample from it. We then prove novel Bayesian simple regret bounds for both
classical batched TS as well as our counterpart DPP-TS, with the latter bound
being tighter. Our real-world, as well as synthetic, experiments demonstrate
improved performance of DPP-BBO over classical batching methods with Gaussian
process and Cox process models.
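As a rough illustration of the DPP-BBO idea, the sketch below selects a diverse batch from a finite candidate set by combining a Thompson-style posterior sample (quality) with the GP posterior covariance (repulsion) in a DPP-like L-kernel. All modeling choices here are assumptions for illustration: the RBF kernel, the quality weighting, and the greedy log-det selection, which stands in for the paper's actual MCMC sampling procedure.

```python
# Illustrative sketch only, not the authors' exact DPP-TS algorithm:
# a greedy log-det heuristic replaces the paper's MCMC sampler, and all
# kernel/model choices below are assumptions.
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.2):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_obs, y_obs, X_cand, noise=1e-2):
    # Standard GP regression posterior over the candidate set.
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_obs, X_cand)
    K_ss = rbf_kernel(X_cand, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    v = np.linalg.solve(L, K_s)
    mu = K_s.T @ alpha
    cov = K_ss - v.T @ v
    return mu, cov

def dpp_diversified_batch(mu, cov, batch_size, quality_scale=5.0):
    # Quality-diversity L-kernel: L = diag(q) C diag(q), where q favours
    # points scoring high under one Thompson posterior sample and the
    # posterior covariance C penalises redundant (similar) points.
    jitter = 1e-8 * np.eye(len(mu))
    sample = np.random.multivariate_normal(mu, cov + jitter)
    q = np.exp(quality_scale * (sample - sample.max()))
    Lmat = q[:, None] * (cov + jitter) * q[None, :]
    chosen = []
    for _ in range(batch_size):
        best, best_gain = None, -np.inf
        for i in range(len(mu)):
            if i in chosen:
                continue
            idx = chosen + [i]
            _, logdet = np.linalg.slogdet(Lmat[np.ix_(idx, idx)])
            if logdet > best_gain:
                best, best_gain = i, logdet
        chosen.append(best)
    return chosen

# Usage: fit on a few noisy observations, then pick a diverse batch of 3.
np.random.seed(0)
X_obs = np.random.rand(5, 1)
y_obs = np.sin(6.0 * X_obs[:, 0])
X_cand = np.linspace(0.0, 1.0, 30)[:, None]
mu, cov = gp_posterior(X_obs, y_obs, X_cand)
batch = dpp_diversified_batch(mu, cov, batch_size=3)
```

Because the determinant of the selected submatrix shrinks when two chosen points are highly correlated under the posterior, the greedy step naturally spreads the batch across the search space rather than clustering around a single high-value region.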
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower costs and better robustness to noise, which is verified by extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Plug-and-Play split Gibbs sampler: embedding deep generative priors in Bayesian inference [12.91637880428221]
This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
arXiv Detail & Related papers (2023-04-21T17:17:51Z)
- Fast post-process Bayesian inference with Variational Sparse Bayesian Quadrature [13.36200518068162]
We propose the framework of post-process Bayesian inference as a means to obtain a quick posterior approximation from existing target density evaluations.
Within this framework, we introduce Variational Sparse Bayesian Quadrature (VSBQ), a method for post-process approximate inference for models with black-box and potentially noisy likelihoods.
We validate our method on challenging synthetic scenarios and real-world applications from computational neuroscience.
arXiv Detail & Related papers (2023-03-09T13:58:35Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Gaussian Process Sampling and Optimization with Approximate Upper and Lower Bounds [43.70206216468687]
Many functions have approximately-known upper and/or lower bounds, potentially aiding the modeling of such functions.
We introduce Gaussian process models for functions where such bounds are (approximately) known.
That is, we transform a GP model satisfying the given bounds, and then sample and weight functions from its posterior.
arXiv Detail & Related papers (2021-10-22T22:35:57Z)
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
arXiv Detail & Related papers (2021-06-30T11:00:24Z)
- Preferential Bayesian optimisation with Skew Gaussian Processes [0.225596179391365]
We show that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP).
We derive an efficient method to compute the exact SkewGP posterior and use it as surrogate model for PBO employing standard acquisition functions.
We also show that our framework can be extended to deal with mixed preferential-categorical BO.
arXiv Detail & Related papers (2020-08-15T08:23:17Z)
- Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits [56.31950477139053]
We investigate combinatorial multi-armed bandits with semi-bandit feedback (CMAB).
We analyze variants of the Combinatorial Thompson Sampling policy (CTS).
This last result gives us an alternative to the Efficient Sampling for Combinatorial Bandit policy (ESCB).
arXiv Detail & Related papers (2020-06-11T17:12:11Z)
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret.
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
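The randomised GP-UCB idea above can be sketched in a few lines: rather than using a fixed exploration parameter, the trade-off parameter is drawn from a distribution at every round. The Gamma distribution and its parameters below are assumptions chosen purely for illustration, not the paper's exact sampling scheme.

```python
# Hedged sketch of randomised GP-UCB: the exploration parameter beta is
# sampled each round instead of following a fixed schedule. The Gamma
# distribution here is an illustrative assumption, not the paper's choice.
import numpy as np

def randomised_gp_ucb_acquisition(mu, sigma, rng, shape=2.0, scale=1.0):
    """Score candidates with mu + sqrt(beta) * sigma, beta ~ Gamma(shape, scale)."""
    beta = rng.gamma(shape, scale)
    return mu + np.sqrt(beta) * sigma

# Usage: pick the next evaluation point from GP posterior means/stddevs.
rng = np.random.default_rng(0)
mu = np.array([0.1, 0.5, 0.3])      # posterior means over candidates
sigma = np.array([0.9, 0.1, 0.4])   # posterior standard deviations
scores = randomised_gp_ucb_acquisition(mu, sigma, rng)
next_point = int(np.argmax(scores))
```

Since beta is nonnegative, each score is at least the posterior mean, and redrawing beta every round varies how strongly high-uncertainty candidates are favoured.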
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.