Towards Practical Preferential Bayesian Optimization with Skew Gaussian
Processes
- URL: http://arxiv.org/abs/2302.01513v2
- Date: Mon, 12 Jun 2023 02:38:21 GMT
- Title: Towards Practical Preferential Bayesian Optimization with Skew Gaussian
Processes
- Authors: Shion Takeno, Masahiro Nomura, Masayuki Karasuyama
- Abstract summary: We study preferential Bayesian optimization (BO), where reliable feedback is limited to pairwise comparisons called duels.
An important challenge in preferential BO, which uses the preferential Gaussian process (GP) model to represent a flexible preference structure, is that the posterior distribution is a computationally intractable skew GP.
We develop a new method that achieves both high computational efficiency and low sample complexity, and then demonstrate its effectiveness through extensive numerical experiments.
- Score: 8.198195852439946
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We study preferential Bayesian optimization (BO), where reliable feedback is
limited to pairwise comparisons called duels. An important challenge in
preferential BO, which uses the preferential Gaussian process (GP) model to
represent a flexible preference structure, is that the posterior distribution is
a computationally intractable skew GP. The most widely used approach for
preferential BO is the Gaussian approximation, which ignores the skewness of the
true posterior. Alternatively, Markov chain Monte Carlo (MCMC) based
preferential BO has also been proposed. In this work, we first examine the
accuracy of the Gaussian approximation and reveal a critical problem: the
predictive probability of duels can be inaccurate. This observation motivates
us to improve MCMC-based estimation for the skew GP, for which we show the
practical efficiency of Gibbs sampling and derive a low-variance MC
estimator. However, the computational time of MCMC can still be a bottleneck in
practice. Towards a more practical preferential BO, we develop a new
method that achieves both high computational efficiency and low sample
complexity, and we demonstrate its effectiveness through extensive numerical
experiments.
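To make the abstract's central observation concrete, the sketch below sets up a toy two-point preferential model with a single observed duel, draws posterior samples of the latent utilities with an Albert-Chib-style data-augmentation Gibbs sampler, and compares a Monte Carlo estimate of the duel predictive probability against a moment-matched Gaussian approximation (a stand-in for the Laplace/EP-style Gaussian approximations used in practice). Everything here, including the prior covariance K, the noise scale sigma, and the sampler itself, is an assumed illustration, not the paper's algorithm or estimator.

```python
import numpy as np
from scipy.stats import norm, truncnorm

# Minimal sketch (assumed toy setting, not the paper's algorithm):
# two candidates x and x' with latent utilities f = (f(x), f(x')) under a GP
# prior, one observed duel "x beats x'" with probit likelihood
# Phi((f(x) - f(x')) / (sqrt(2) * sigma)), posterior sampling via an
# Albert-Chib-style data-augmentation Gibbs sampler, and two estimates of the
# predictive probability that x beats x' in a fresh duel.

rng = np.random.default_rng(0)

K = np.array([[1.0, 0.5],
              [0.5, 1.0]])        # hypothetical prior covariance of (f(x), f(x'))
w = np.array([1.0, -1.0])         # encodes the utility difference d = f(x) - f(x')
sigma = 0.1                       # assumed duel-noise scale
s2 = 2.0 * sigma**2               # variance of the augmentation variable g

K_inv = np.linalg.inv(K)
C = np.linalg.inv(K_inv + np.outer(w, w) / s2)   # covariance of f given g
C_chol = np.linalg.cholesky(C)

# Gibbs sampling of the skewed posterior p(f | x beats x').
n_iter, burn_in = 10000, 1000
f = np.zeros(2)
d_samples = []
for t in range(n_iter):
    m = float(w @ f)
    # g | f, "x beats x'"  ~  N(m, 2*sigma^2) truncated to (0, inf)
    g = truncnorm.rvs(-m / np.sqrt(s2), np.inf,
                      loc=m, scale=np.sqrt(s2), random_state=rng)
    # f | g  ~  N((C @ w) * g / s2, C)   (prior mean is zero here)
    f = C @ w * (g / s2) + C_chol @ rng.standard_normal(2)
    if t >= burn_in:
        d_samples.append(float(w @ f))
d = np.array(d_samples)

# Predictive probability that x beats x' in a new duel:
# (1) Monte Carlo over the skewed Gibbs samples.
p_mc = norm.cdf(d / np.sqrt(s2)).mean()
# (2) Moment-matched Gaussian approximation of d, using the identity
#     E_{d ~ N(mu, var)}[Phi(d / s)] = Phi(mu / sqrt(s^2 + var)).
mu, var = d.mean(), d.var()
p_gauss = norm.cdf(mu / np.sqrt(s2 + var))

print(f"MC over Gibbs samples  : {p_mc:.3f}")
print(f"Gaussian approximation : {p_gauss:.3f}")
```

Because even a single duel skews the posterior of the utility difference, the two printed probabilities generally disagree; the size of the gap depends on the assumed kernel and noise scale, but it is the kind of error the abstract refers to when it says the predictive probability of duels can be inaccurate under the Gaussian approximation.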
Related papers
- Robust Bayesian Optimization via Localized Online Conformal Prediction [37.549297668783254]
We introduce localized online conformal prediction-based Bayesian optimization (LOCBO)
LOCBO calibrates the GP model through localized online conformal prediction (CP)
We provide theoretical performance guarantees for LOCBO's iterates that hold for the unobserved objective function.
arXiv Detail & Related papers (2024-11-26T12:45:54Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO)
Compared to the classic GP-BO method, our PoPBO has lower computational cost and better robustness to noise, which we verify through extensive experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Provably Efficient Bayesian Optimization with Unknown Gaussian Process Hyperparameter Estimation [44.53678257757108]
We propose a new BO method that can sub-linearly converge to the objective function's global optimum.
Our method uses a multi-armed bandit technique (EXP3) to add random data points to the BO process.
We demonstrate empirically that our method outperforms existing approaches on various synthetic and real-world problems.
arXiv Detail & Related papers (2023-06-12T03:35:45Z) - Model-based Causal Bayesian Optimization [78.120734120667]
We propose model-based causal Bayesian optimization (MCBO)
MCBO learns a full system model instead of only modeling intervention-reward pairs.
Unlike in standard Bayesian optimization, our acquisition function cannot be evaluated in closed form.
arXiv Detail & Related papers (2022-11-18T14:28:21Z) - Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
arXiv Detail & Related papers (2022-06-22T17:58:23Z) - Posterior Refinement Improves Sample Efficiency in Bayesian Neural
Networks [27.11052209129402]
We experimentally show that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself.
We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
arXiv Detail & Related papers (2022-05-20T09:24:39Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is a linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) arises in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - What Are Bayesian Neural Network Posteriors Really Like? [63.950151520585024]
We show that Hamiltonian Monte Carlo can achieve significant performance gains over standard training and deep ensembles.
We also show that deep ensemble predictive distributions are about as close to those of HMC as standard SGLD, and closer than standard variational inference.
arXiv Detail & Related papers (2021-04-29T15:38:46Z) - Approximate Bayesian inference from noisy likelihoods with Gaussian
process emulated MCMC [0.24275655667345403]
We model the log-likelihood function using a Gaussian process (GP)
The main methodological innovation is to apply this model to emulate the progression that an exact Metropolis-Hastings (MH) sampler would take.
The resulting approximate sampler is conceptually simple and sample-efficient.
arXiv Detail & Related papers (2021-04-08T17:38:02Z) - Preferential Bayesian optimisation with Skew Gaussian Processes [0.225596179391365]
We show that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP); a brief one-duel sketch of why this posterior is skew appears after this list.
We derive an efficient method to compute the exact SkewGP posterior and use it as a surrogate model for PBO with standard acquisition functions.
We also show that our framework can be extended to deal with mixed preferential-categorical BO.
arXiv Detail & Related papers (2020-08-15T08:23:17Z) - Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.