Preferential Bayesian optimisation with Skew Gaussian Processes
- URL: http://arxiv.org/abs/2008.06677v3
- Date: Thu, 1 Apr 2021 17:46:11 GMT
- Title: Preferential Bayesian optimisation with Skew Gaussian Processes
- Authors: Alessio Benavoli, Dario Azzimonti, Dario Piga
- Abstract summary: We show that the true posterior distribution of the preference function is a Skew Gaussian Process (SkewGP).
We derive an efficient method to compute the exact SkewGP posterior and use it as a surrogate model for PBO employing standard acquisition functions.
We also show that our framework can be extended to deal with mixed preferential-categorical BO.
- Score: 0.225596179391365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Preferential Bayesian optimisation (PBO) deals with optimisation problems
where the objective function can only be accessed via preference judgments,
such as "this is better than that" between two candidate solutions (like in A/B
tests or recommender systems). The state-of-the-art approach to PBO uses a
Gaussian process to model the preference function and a Bernoulli likelihood to
model the observed pairwise comparisons. Laplace's method is then employed to
compute posterior inferences and, in particular, to build an appropriate
acquisition function. In this paper, we prove that the true posterior
distribution of the preference function is a Skew Gaussian Process (SkewGP),
with highly skewed pairwise marginals and, thus, show that Laplace's method
usually provides a very poor approximation. We then derive an efficient method
to compute the exact SkewGP posterior and use it as a surrogate model for PBO
employing standard acquisition functions (Upper Credible Bound, etc.). We
illustrate the benefits of our exact PBO-SkewGP in a variety of experiments, by
showing that it consistently outperforms PBO based on Laplace's approximation
both in terms of convergence speed and computational time. We also show that
our framework can be extended to deal with mixed preferential-categorical BO,
where binary judgments (valid or non-valid) together with preference judgments
are available.
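As context, the pairwise-comparison model that the paper starts from can be sketched in code. The following is a minimal, illustrative implementation of the Laplace-approximation baseline that the paper critiques, not the exact SkewGP posterior: latent utilities get a GP prior, each duel (i preferred over j) contributes a probit likelihood Phi((f_i - f_j)/sqrt(2)), and Newton's method finds the MAP estimate. Kernel, lengthscale, and the toy utility are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(X, lengthscale=0.25):
    # squared-exponential kernel over 1-D inputs
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def laplace_preference_map(K, duels, n_iter=30, jitter=1e-6):
    """Laplace MAP of latent utilities f from duels (i preferred over j).

    Each duel contributes log Phi((f_i - f_j) / sqrt(2)) to the
    log-likelihood; the prior is f ~ N(0, K). Illustrative sketch only.
    """
    n = K.shape[0]
    f = np.zeros(n)
    K_inv = np.linalg.inv(K + jitter * np.eye(n))
    for _ in range(n_iter):
        grad = -K_inv @ f          # gradient of the log prior
        hess = -K_inv.copy()       # Hessian of the log prior
        for i, j in duels:
            z = (f[i] - f[j]) / np.sqrt(2.0)
            r = norm.pdf(z) / max(norm.cdf(z), 1e-12)
            g = r / np.sqrt(2.0)   # d log Phi / d f_i
            grad[i] += g
            grad[j] -= g
            h = r * (z + r) / 2.0  # likelihood curvature per duel
            for a, sa in ((i, 1.0), (j, -1.0)):
                for b, sb in ((i, 1.0), (j, -1.0)):
                    hess[a, b] -= sa * sb * h
        f = f - np.linalg.solve(hess, grad)  # Newton step
    return f

# Toy example: 5 candidates, duels generated from a hidden utility.
X = np.linspace(0.0, 1.0, 5)
utility = -(X - 0.3) ** 2
duels = [(i, j) for i in range(5) for j in range(5)
         if utility[i] > utility[j]]
f_map = laplace_preference_map(rbf_kernel(X), duels)
best = int(np.argmax(f_map))  # index of the inferred best candidate
```

The paper's point is that a Gaussian centred at this MAP can badly misrepresent the true (skewed) posterior; the SkewGP approach replaces the approximation with exact posterior inference.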
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO)
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Simulation Based Bayesian Optimization [0.6526824510982799]
This paper introduces Simulation Based Bayesian Optimization (SBBO) as a novel approach to optimizing acquisition functions.
SBBO allows the use of surrogate models tailored for spaces with discrete variables.
We demonstrate empirically the effectiveness of SBBO method using various choices of surrogate models.
arXiv Detail & Related papers (2024-01-19T16:56:11Z)
- Predictive Modeling through Hyper-Bayesian Optimization [60.586813904500595]
We propose a novel way of integrating model selection and BO for the single goal of reaching the function optima faster.
The algorithm moves back and forth between BO in the model space and BO in the function space, where the goodness of the recommended model is captured.
In addition to improved sample efficiency, the framework outputs information about the black-box function.
arXiv Detail & Related papers (2023-08-01T04:46:58Z)
- Towards Practical Preferential Bayesian Optimization with Skew Gaussian Processes [8.198195852439946]
We study preferential Bayesian optimization (BO) where reliable feedback is limited to pairwise comparison called duels.
An important challenge in preferential BO, which uses the preferential Gaussian process (GP) model to represent flexible preference structures, is that the posterior distribution is a computationally intractable skew GP.
We develop a new method that achieves both high computational efficiency and low sample complexity, and then demonstrate its effectiveness through extensive numerical experiments.
arXiv Detail & Related papers (2023-02-03T03:02:38Z)
- Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization [0.0]
This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling.
The method called relaxed Gaussian process (reGP) provides better predictive distributions in ranges of interest.
It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization.
arXiv Detail & Related papers (2022-06-07T06:26:46Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Probabilistic Permutation Graph Search: Black-Box Optimization for Fairness in Ranking [53.94413894017409]
We present a novel way of representing permutation distributions, based on the notion of permutation graphs.
Similar to PL, our distribution representation, called PPG, can be used for black-box optimization of fairness.
arXiv Detail & Related papers (2022-04-28T20:38:34Z)
- Gaussian Process Sampling and Optimization with Approximate Upper and Lower Bounds [43.70206216468687]
Many functions have approximately-known upper and/or lower bounds, potentially aiding the modeling of such functions.
We introduce Gaussian process models for functions where such bounds are (approximately) known.
That is, we transform a GP model satisfying the given bounds, and then sample and weight functions from its posterior.
arXiv Detail & Related papers (2021-10-22T22:35:57Z)
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret.
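The mechanism can be sketched as follows: instead of fixing the trade-off parameter beta in the GP-UCB score mu(x) + sqrt(beta) * sigma(x), a fresh value is sampled each iteration. This is an illustrative sketch under assumptions; in particular, the exponential sampling distribution is a placeholder, not the distribution the paper derives.

```python
import numpy as np

def gp_posterior(X_obs, y_obs, X_cand, ls=0.3, noise=1e-4):
    # standard GP regression posterior on 1-D candidate points
    def rbf(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls ** 2)
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_obs, X_cand)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # rbf(x, x) == 1
    return mu, np.sqrt(np.maximum(var, 0.0))

def randomised_ucb(mu, sigma, rng):
    # sample the exploration weight rather than fixing it
    # (exponential scale 1.0 is an assumed, illustrative choice)
    beta = rng.exponential(scale=1.0)
    return mu + np.sqrt(beta) * sigma

rng = np.random.default_rng(0)
X_obs = np.array([0.1, 0.5, 0.9])
y_obs = np.sin(3.0 * X_obs)
X_cand = np.linspace(0.0, 1.0, 101)
mu, sigma = gp_posterior(X_obs, y_obs, X_cand)
x_next = X_cand[np.argmax(randomised_ucb(mu, sigma, rng))]
```

Resampling beta at each step varies how aggressively the rule explores, which is what lets the expected trade-off adapt to the problem while, per the paper, preserving a Bayesian-regret bound.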
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In big search spaces the algorithm goes through several low function value regions before reaching the optimum of the function.
One approach to shortening this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space so that it expands around the high-probability region of the function optimum and shrinks around low-probability regions.
arXiv Detail & Related papers (2020-03-27T06:18:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.