Probability Distribution of Hypervolume Improvement in Bi-objective Bayesian Optimization
- URL: http://arxiv.org/abs/2205.05505v3
- Date: Mon, 6 May 2024 11:11:18 GMT
- Title: Probability Distribution of Hypervolume Improvement in Bi-objective Bayesian Optimization
- Authors: Hao Wang, Kaifeng Yang, Michael Affenzeller
- Abstract summary: Hypervolume improvement (HVI) is commonly employed in multi-objective Bayesian optimization algorithms.
We provide the exact expression of HVI's probability distribution for bi-objective problems.
We propose a novel acquisition function - $\varepsilon$-PoHVI.
- Score: 5.586361810914231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hypervolume improvement (HVI) is commonly employed in multi-objective Bayesian optimization algorithms to define acquisition functions due to its Pareto-compliant property. Rather than focusing on specific statistical moments of HVI, this work aims to provide the exact expression of HVI's probability distribution for bi-objective problems. Considering a bi-variate Gaussian random variable resulting from Gaussian process (GP) modeling, we derive the probability distribution of its hypervolume improvement via a cell partition-based method. Our exact expression is superior in numerical accuracy and computation efficiency compared to the Monte Carlo approximation of HVI's distribution. Utilizing this distribution, we propose a novel acquisition function - $\varepsilon$-probability of hypervolume improvement ($\varepsilon$-PoHVI). Experimentally, we show that on many widely-applied bi-objective test problems, $\varepsilon$-PoHVI significantly outperforms other related acquisition functions, e.g., $\varepsilon$-PoI, and expected hypervolume improvement, when the GP model exhibits large prediction uncertainty.
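The paper's contribution is the exact, cell-partition-based expression for this distribution, which is not reproduced here. As a rough illustration of the quantity being characterized, the sketch below estimates the HVI distribution and an $\varepsilon$-PoHVI-style probability by plain Monte Carlo (the baseline the exact expression is compared against) for a bi-objective minimization problem, assuming an independent bivariate Gaussian prediction. The Pareto-front approximation, reference point, predictive moments, and threshold $\varepsilon$ are hypothetical placeholders.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points (minimization) w.r.t. ref."""
    pts = np.array([p for p in points if np.all(p < ref)])
    if pts.size == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]            # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                        # skip points dominated within the set
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def hvi_samples(mu, sigma, pareto, ref, n=20_000, seed=0):
    """Monte Carlo samples of HVI(Y) for Y ~ N(mu, diag(sigma**2))."""
    rng = np.random.default_rng(seed)
    base = hypervolume_2d(pareto, ref)
    ys = rng.normal(mu, sigma, size=(n, 2))
    return np.array([hypervolume_2d(np.vstack([pareto, y]), ref) - base for y in ys])

# Hypothetical GP prediction at one candidate point and a small Pareto-front approximation.
pareto = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
ref = np.array([1.0, 1.0])
hvi = hvi_samples(mu=[0.4, 0.4], sigma=[0.15, 0.15], pareto=pareto, ref=ref)
eps = 0.01
print("Monte Carlo estimate of P(HVI > eps):", (hvi > eps).mean())
```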
Related papers
- Using Gaussian Boson Samplers to Approximate Gaussian Expectation Problems [0.0]
We show that two estimators using GBS samples can bring an exponential speedup over the plain Monte Carlo (MC) estimator.
Precisely speaking, the exponential speedup is defined in terms of the guaranteed sample size for these estimators to reach the same level of accuracy.
arXiv Detail & Related papers (2025-02-26T17:30:49Z)
- EigenVI: score-based variational inference with orthogonal function expansions [23.696028065251497]
EigenVI is an eigenvalue-based approach for black-box variational inference (BBVI)
We use EigenVI to approximate a variety of target distributions, including a benchmark suite of Bayesian models from posteriordb.
arXiv Detail & Related papers (2024-10-31T15:48:34Z)
- Extending Mean-Field Variational Inference via Entropic Regularization: Theory and Computation [2.2656885622116394]
Variational inference (VI) has emerged as a popular method for approximate inference for high-dimensional Bayesian models.
We propose a novel VI method that extends the naive mean field via entropic regularization.
We show that $\Xi$-variational posteriors effectively recover the true posterior dependency.
arXiv Detail & Related papers (2024-04-14T01:40:11Z)
- Efficient expectation propagation for posterior approximation in high-dimensional probit models [1.433758865948252]
We focus on the expectation propagation (EP) approximation of the posterior distribution in Bayesian probit regression.
We show how to leverage results on the extended multivariate skew-normal distribution to derive an efficient implementation of the EP routine.
This makes EP computationally feasible also in challenging high-dimensional settings, as shown in a detailed simulation study.
arXiv Detail & Related papers (2023-09-04T14:07:19Z)
- Ensemble Multi-Quantiles: Adaptively Flexible Distribution Prediction for Uncertainty Quantification [4.728311759896569]
We propose a novel, succinct, and effective approach for distribution prediction to quantify uncertainty in machine learning.
It incorporates adaptively flexible distribution prediction of $\mathbb{P}(\mathbf{y}|\mathbf{X}=x)$ in regression tasks.
On extensive regression tasks from UCI datasets, we show that EMQ achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-11-26T11:45:32Z)
- Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, MGVBP stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z)
- Efficient Approximation of Expected Hypervolume Improvement using Gauss-Hermite Quadrature [0.0]
We show that Gauss-Hermite quadrature can be an accurate alternative to Monte Carlo for both independent and correlated predictive densities.
arXiv Detail & Related papers (2022-06-15T22:09:48Z)
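As a minimal sketch of the Gauss-Hermite idea summarized in the entry above, the snippet below approximates an expectation under a one-dimensional Gaussian predictive density and compares it with Monte Carlo; it is not the authors' multi-objective EHVI implementation, and the improvement-style integrand, mean, and standard deviation are placeholders.

```python
import numpy as np

# Gauss-Hermite nodes and weights for integrals of the form ∫ exp(-t^2) g(t) dt.
nodes, weights = np.polynomial.hermite.hermgauss(32)

def gauss_hermite_expectation(g, mu, sigma):
    """E[g(Y)] for Y ~ N(mu, sigma^2) via the substitution y = mu + sqrt(2)*sigma*t,
    giving E[g(Y)] ≈ (1/sqrt(pi)) * sum_i w_i * g(y_i)."""
    y = mu + np.sqrt(2.0) * sigma * nodes
    return np.dot(weights, g(y)) / np.sqrt(np.pi)

# Placeholder improvement-style integrand: expected improvement below a target T.
T, mu, sigma = 0.3, 0.5, 0.2
gh = gauss_hermite_expectation(lambda y: np.maximum(T - y, 0.0), mu, sigma)
mc = np.maximum(T - np.random.default_rng(0).normal(mu, sigma, 1_000_000), 0.0).mean()
print(f"Gauss-Hermite: {gh:.6f}   Monte Carlo: {mc:.6f}")
```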
- Loss function based second-order Jensen inequality and its application to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
arXiv Detail & Related papers (2021-06-09T12:13:51Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution.
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret.
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
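A minimal sketch of the randomised-UCB idea summarized in the entry above: the exploration-exploitation trade-off parameter is drawn from a distribution before each acquisition maximization instead of following a deterministic schedule. The exponential sampling distribution and the toy posterior below are placeholders, not the paper's choices.

```python
import numpy as np

def randomised_ucb(mu, sigma, rng):
    """GP-UCB-style acquisition with a sampled exploration-exploitation parameter
    (maximization convention); the Exponential distribution is a placeholder."""
    beta = rng.exponential(scale=2.0)   # random trade-off parameter for this iteration
    return mu + np.sqrt(beta) * sigma

# Hypothetical GP posterior over a grid of candidate points.
rng = np.random.default_rng(1)
xs = np.linspace(0.0, 1.0, 201)
mu = np.sin(3.0 * xs)                   # stand-in posterior mean
sigma = 0.1 + 0.3 * np.abs(xs - 0.5)    # stand-in posterior standard deviation
x_next = xs[np.argmax(randomised_ucb(mu, sigma, rng))]
print("next query point:", x_next)
```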
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
- Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study BQO under distributional uncertainty in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)