Diversified Sampling for Batched Bayesian Optimization with
Determinantal Point Processes
- URL: http://arxiv.org/abs/2110.11665v1
- Date: Fri, 22 Oct 2021 08:51:28 GMT
- Authors: Elvis Nava, Mojmír Mutný, Andreas Krause
- Abstract summary: We introduce DPP-Batch Bayesian Optimization (DPP-BBO), a universal framework for inducing batch diversity in sampling-based BO.
We illustrate this framework by formulating DPP-Thompson Sampling (DPP-TS) as a variant of the popular Thompson Sampling (TS) algorithm and introducing a Markov Chain Monte Carlo procedure to sample from it.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Bayesian Optimization (BO) we study black-box function optimization with
noisy point evaluations and Bayesian priors. Convergence of BO can be greatly
sped up by batching, where multiple evaluations of the black-box function are
performed in a single round. The main difficulty in this setting is to propose
batches of evaluation points that are simultaneously diverse and informative. In this
work, we introduce DPP-Batch Bayesian Optimization (DPP-BBO), a universal
framework for inducing batch diversity in sampling-based BO by leveraging the
repulsive properties of Determinantal Point Processes (DPPs) to naturally
diversify the batch sampling procedure. We illustrate this framework by
formulating DPP-Thompson Sampling (DPP-TS) as a variant of the popular Thompson
Sampling (TS) algorithm and introducing a Markov Chain Monte Carlo procedure to
sample from it. We then prove novel Bayesian simple regret bounds for both
classical batched TS as well as our counterpart DPP-TS, with the latter bound
being tighter. Our real-world and synthetic experiments demonstrate
improved performance of DPP-BBO over classical batching methods with Gaussian
process and Cox process models.
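To make the batched sampling idea concrete, here is a minimal Python sketch of DPP-diversified Thompson sampling on a candidate grid, using scikit-learn Gaussian processes. It is not the paper's implementation: where DPP-TS samples batches from a DPP-reweighted distribution via MCMC, this sketch greedily maximises the log-determinant of the posterior covariance over Thompson-sample maximisers, a common greedy stand-in for DPP sampling; the function name `dpp_ts_batch` and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def dpp_ts_batch(gp, candidates, batch_size, n_samples=64, seed=0):
    """Greedy, DPP-flavoured stand-in for DPP-TS batch selection.

    1. Draw posterior (Thompson) samples over the candidate grid and
       keep the argmax location of each sample (standard batched TS).
    2. Among those maximisers, greedily grow the batch whose posterior
       covariance submatrix has maximal log-determinant, so points the
       model considers redundant repel each other.
    """
    samples = gp.sample_y(candidates, n_samples=n_samples, random_state=seed)
    maximisers = np.unique(samples.argmax(axis=0))  # TS argmax indices
    _, cov = gp.predict(candidates[maximisers], return_cov=True)
    cov = cov + 1e-8 * np.eye(len(maximisers))      # jitter for stability

    chosen = []
    for _ in range(min(batch_size, len(maximisers))):
        best_i, best_logdet = None, -np.inf
        for i in range(len(maximisers)):
            if i in chosen:
                continue
            idx = chosen + [i]
            sign, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best_i, best_logdet = i, logdet
        chosen.append(best_i)
    return candidates[maximisers[chosen]]

# Toy usage: propose a diverse batch of 4 evaluation points in 1-D.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (12, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(12)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.01).fit(X, y)
grid = np.linspace(0, 10, 200).reshape(-1, 1)
print(dpp_ts_batch(gp, grid, batch_size=4))
```

Because the determinant vanishes for near-duplicate rows, the greedy step naturally spreads the batch across distinct maximisers, which is the repulsion effect the abstract describes.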
Related papers
- Batched Energy-Entropy acquisition for Bayesian Optimization (arXiv, 2024-10-11)
We propose a statistical-physics-inspired acquisition function (BEEBO) for BO with Gaussian processes that can handle batches.
BEEBO enables tight control of the explore-exploit trade-off of the optimization process and generalizes to heteroskedastic black-box problems.
- Batched Bayesian optimization with correlated candidate uncertainties (arXiv, 2024-10-08)
We propose qPO (multipoint Probability of Optimality), an acquisition strategy for discrete optimization motivated by pure exploitation.
We apply our method to the model-guided exploration of large chemical libraries and provide empirical evidence that it performs better than or on par with state-of-the-art methods in batched Bayesian optimization.
- Poisson Process for Bayesian Optimization (arXiv, 2024-02-05)
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, PoPBO has lower costs and better robustness to noise, which is verified by extensive experiments.
- Plug-and-Play split Gibbs sampler: embedding deep generative priors in Bayesian inference (arXiv, 2023-04-21)
This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process (arXiv, 2022-05-27)
We propose a novel Bayesian surrogate model, an ensemble of Gaussian processes (EGP), to balance exploration with exploitation of the search space.
To endow function sampling with scalability, a random feature-based kernel approximation is leveraged per GP model.
Convergence of the proposed EGP-based Thompson sampling (EGP-TS) to the global optimum is established through an analysis based on the notion of Bayesian regret.
- Variational Refinement for Importance Sampling Using the Forward Kullback-Leibler Divergence (arXiv, 2021-06-30)
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
- Preferential Bayesian optimisation with Skew Gaussian Processes (arXiv, 2020-08-15)
We show that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP).
We derive an efficient method to compute the exact SkewGP posterior and use it as a surrogate model for preferential Bayesian optimisation (PBO) employing standard acquisition functions.
We also show that our framework can be extended to deal with mixed preferential-categorical BO.
- Statistical Efficiency of Thompson Sampling for Combinatorial Semi-Bandits (arXiv, 2020-06-11)
We investigate the combinatorial multi-armed bandit problem with semi-bandit feedback (CMAB).
We analyze variants of the Combinatorial Thompson Sampling policy (CTS).
This last result gives us an alternative to the Efficient Sampling for Combinatorial Bandit policy (ESCB).
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation (arXiv, 2020-06-08)
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret (a minimal illustrative sketch of this randomised trade-off follows the list).
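As flagged in the last entry above, here is a minimal, hedged sketch of the randomised GP-UCB idea: draw the exploration-exploitation weight from a distribution at each round rather than fixing it. It uses scikit-learn; the Gamma distribution and every name below (`randomised_ucb_acquisition`, the shape/scale values) are illustrative assumptions, not the paper's prescribed choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def randomised_ucb_acquisition(gp, candidates, rng):
    """GP-UCB score with a sampled trade-off parameter.

    Instead of a fixed beta_t, a random beta is drawn each round, so
    the expected exploration weight can be tuned to the problem. The
    Gamma(2, 0.5) choice here is an illustrative assumption.
    """
    mean, std = gp.predict(candidates, return_std=True)
    beta = rng.gamma(shape=2.0, scale=0.5)  # random exploration weight
    return mean + np.sqrt(beta) * std

# Toy usage: pick the next query point on a 1-D grid.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (10, 1))
y = np.cos(2 * X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3).fit(X, y)
grid = np.linspace(-3, 3, 200).reshape(-1, 1)
scores = randomised_ucb_acquisition(gp, grid, rng)
print("next query:", grid[scores.argmax()])
```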
This list is automatically generated from the titles and abstracts of the papers on this site.