Efficient Approximation of Expected Hypervolume Improvement using
Gauss-Hermite Quadrature
- URL: http://arxiv.org/abs/2206.07834v1
- Date: Wed, 15 Jun 2022 22:09:48 GMT
- Title: Efficient Approximation of Expected Hypervolume Improvement using
Gauss-Hermite Quadrature
- Authors: Alma Rahat, Tinkle Chugh, Jonathan Fieldsend, Richard Allmendinger,
Kaisa Miettinen
- Abstract summary: We show that Gauss-Hermite quadrature can be an accurate alternative to Monte Carlo for both independent and correlated predictive densities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many methods for performing multi-objective optimisation of computationally
expensive problems have been proposed recently. Typically, a probabilistic
surrogate for each objective is constructed from an initial dataset. The
surrogates can then be used to produce predictive densities in the objective
space for any solution. Using the predictive densities, we can compute the
expected hypervolume improvement (EHVI) due to a solution. Maximising the EHVI,
we can locate the most promising solution that may be expensively evaluated
next. There are closed-form expressions for computing the EHVI, integrating
over the multivariate predictive densities. However, they require partitioning
the objective space, which can be prohibitively expensive for more than three
objectives. Furthermore, there are no closed-form expressions for a problem
where the predictive densities are dependent, capturing the correlations
between objectives. In such cases, Monte Carlo approximation is used instead,
which is itself computationally costly. Hence, the need for new approximation
methods that are accurate but cheaper remains. Here we investigate an
alternative approach to approximating the EHVI using Gauss-Hermite quadrature. We show that it
can be an accurate alternative to Monte Carlo for both independent and
correlated predictive densities with statistically significant rank
correlations for a range of popular test problems.
Related papers
- Estimating Barycenters of Distributions with Neural Optimal Transport [93.28746685008093]
We propose a new scalable approach for solving the Wasserstein barycenter problem.
Our methodology is based on the recent Neural OT solver.
We also establish theoretical error bounds for our proposed approach.
arXiv Detail & Related papers (2024-02-06T09:17:07Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function; it is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent and makes the inductive validation model clear.
arXiv Detail & Related papers (2023-07-25T18:47:53Z)
- Robust leave-one-out cross-validation for high-dimensional Bayesian models [0.0]
Leave-one-out cross-validation (LOO-CV) is a popular method for estimating out-of-sample predictive accuracy.
Here we propose and analyze a novel mixture estimator to compute LOO-CV criteria.
Our method retains the simplicity and computational convenience of classical approaches, while guaranteeing finite variance of the resulting estimators.
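For context, a minimal sketch of the classical importance-sampling LOO-CV estimator that such work improves on; its weights can have unbounded variance, which is the problem a finite-variance mixture estimator addresses. This is not the paper's method, and the names are mine.

```python
# Classical importance-sampling LOO-CV from posterior draws (the baseline,
# NOT the paper's mixture estimator). `log_lik` has shape (n_draws, n_obs):
# log p(y_i | theta_s) evaluated at posterior draws theta_s.
import numpy as np

def is_loo_elpd(log_lik):
    """Per-observation LOO log predictive density via importance sampling.

    With raw weights w_is = 1 / p(y_i | theta_s), the estimator reduces to
    a harmonic mean: elpd_loo_i = log S - logsumexp_s(-log_lik[s, i]).
    """
    S = log_lik.shape[0]
    m = (-log_lik).max(axis=0)                      # stabiliser for logsumexp
    lse = m + np.log(np.exp(-log_lik - m).sum(axis=0))
    return np.log(S) - lse

# Toy Gaussian example: posterior draws of a mean parameter.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=20)
mu_draws = rng.normal(y.mean(), 0.2, size=1000)
log_lik = -0.5 * (y[None, :] - mu_draws[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)
print(is_loo_elpd(log_lik).sum())  # estimated elpd_loo
```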
arXiv Detail & Related papers (2022-09-19T17:14:52Z)
- Efficient first-order predictor-corrector multiple objective optimization for fair misinformation detection [5.139559672771439]
Multi-objective optimization (MOO) aims to simultaneously optimize multiple conflicting objectives and has found important applications in machine learning.
We propose a Gauss-Newton approximation that only scales linearly, and that requires only first-order inner products per iteration.
The innovations make predictor-corrector possible for large networks.
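As a generic illustration (not the paper's predictor-corrector algorithm), a Gauss-Newton-vector product can be assembled from first-order products only: one Jacobian-vector product followed by one vector-Jacobian product. A sketch in JAX, with a toy linear model standing in for a network:

```python
# Generic Gauss-Newton-vector product from first-order products only.
# Illustration of the scaling idea, not the paper's algorithm.
import jax
import jax.numpy as jnp

def residuals(w, X, y):
    # Residuals of a toy linear model; any differentiable model works.
    return X @ w - y

def gnvp(w, v, X, y):
    """Compute (J^T J) v without ever forming J: one jvp, then one vjp."""
    f = lambda w: residuals(w, X, y)
    _, Jv = jax.jvp(f, (w,), (v,))   # forward mode: J v
    _, vjp_fn = jax.vjp(f, w)        # reverse-mode closure
    (JTJv,) = vjp_fn(Jv)             # J^T (J v)
    return JTJv

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (50, 5))
w_true = jnp.arange(1.0, 6.0)
y = X @ w_true
w = jnp.zeros(5)
v = jnp.ones(5)
print(gnvp(w, v, X, y))  # equals X.T @ (X @ v) for this linear model
```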
arXiv Detail & Related papers (2022-09-15T12:32:15Z)
- Beyond EM Algorithm on Over-specified Two-Component Location-Scale Gaussian Mixtures [29.26015093627193]
We develop the Exponential Location Update (ELU) algorithm to efficiently explore the curvature of the negative log-likelihood functions.
We demonstrate that the ELU algorithm converges to the final statistical radius of the models after a logarithmic number of iterations.
arXiv Detail & Related papers (2022-05-23T06:49:55Z)
- Fast Batch Nuclear-norm Maximization and Minimization for Robust Domain Adaptation [154.2195491708548]
We study prediction discriminability and diversity by analyzing the structure of the classification output matrix of a randomly selected data batch.
We propose Batch Nuclear-norm Maximization and Minimization, which performs nuclear-norm maximization and minimization on the target output matrix to enhance the target prediction ability.
Experiments show that our method could boost the adaptation accuracy and robustness under three typical domain adaptation scenarios.
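A minimal sketch of the batch nuclear-norm idea as a training loss, assuming the usual formulation on a batch's softmax outputs; this is an illustration, not the authors' released code.

```python
# Batch nuclear-norm maximization as a loss term (illustrative sketch).
import torch

def bnm_loss(logits):
    """Negative nuclear norm of the batch prediction matrix A (B x C).

    Maximizing ||A||_* (the sum of singular values) encourages predictions
    that are both confident (discriminable) and diverse across the batch,
    so we minimize its negation on unlabeled target-domain batches.
    """
    A = torch.softmax(logits, dim=1)          # batch prediction matrix
    return -torch.linalg.matrix_norm(A, ord='nuc')

logits = torch.randn(32, 10, requires_grad=True)  # a random batch
loss = bnm_loss(logits)
loss.backward()                                   # differentiable via SVD
print(loss.item())
```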
arXiv Detail & Related papers (2021-07-13T15:08:32Z)
- The Perils of Learning Before Optimizing [16.97597806975415]
We show how prediction models can be learned end-to-end by differentiating through the optimization task.
We show that the performance gap between a two-stage and an end-to-end approach is closely related to the "price of correlation" concept in optimization.
arXiv Detail & Related papers (2021-06-18T20:43:47Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
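A short usage sketch of the open-source ngboost package, which implements NGBoost; whether the multivariate distribution described in the paper ships in a given release is an assumption on my part, so this example sticks to the univariate Normal default.

```python
# Out-of-the-box NGBoost usage on synthetic data (univariate Normal default;
# the paper's multivariate predictive distribution is not assumed here).
import numpy as np
from ngboost import NGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=500)

ngb = NGBRegressor(n_estimators=200)   # default distribution: Normal
ngb.fit(X, y)

print(ngb.predict(X[:3]))              # point predictions (means)
dists = ngb.pred_dist(X[:3])           # full predictive distributions
print(dists.params['loc'], dists.params['scale'])
```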
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
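The analytic computability for GMMs rests on the closed-form Gaussian overlap integral, since the integral of the product of two Gaussian densities is itself a Gaussian density evaluated at one mean. A minimal sketch of the Cauchy-Schwarz divergence between two GMMs (function and variable names are mine, not the paper's):

```python
# Closed-form Cauchy-Schwarz divergence between two Gaussian mixtures:
# D_CS(p, q) = -log( <p, q> / sqrt(<p, p> <q, q>) ), where
# <p, q> = sum_ij a_i b_j N(m_i; n_j, S_i + T_j).
import numpy as np
from scipy.stats import multivariate_normal

def gmm_inner(w1, mu1, cov1, w2, mu2, cov2):
    """<p, q> = integral of p(x) q(x) dx for two GMMs (closed form)."""
    total = 0.0
    for a, m, S in zip(w1, mu1, cov1):
        for b, n, T in zip(w2, mu2, cov2):
            total += a * b * multivariate_normal.pdf(m, mean=n, cov=S + T)
    return total

def cs_divergence(w1, mu1, cov1, w2, mu2, cov2):
    pq = gmm_inner(w1, mu1, cov1, w2, mu2, cov2)
    pp = gmm_inner(w1, mu1, cov1, w1, mu1, cov1)
    qq = gmm_inner(w2, mu2, cov2, w2, mu2, cov2)
    return -np.log(pq / np.sqrt(pp * qq))

# Two 2-D mixtures; the divergence is 0 iff the mixtures coincide.
I = np.eye(2)
w = [0.5, 0.5]
mus_p = [np.zeros(2), np.array([3.0, 0.0])]
mus_q = [np.array([0.5, 0.0]), np.array([3.5, 0.0])]
covs = [I, I]
print(cs_divergence(w, mus_p, covs, w, mus_q, covs))  # > 0
print(cs_divergence(w, mus_p, covs, w, mus_p, covs))  # ~ 0
```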
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Cross-Entropy Method Variants for Optimization [0.0]
The cross-entropy (CE) method is popular for optimization due to its simplicity and effectiveness.
Certain objective functions may be computationally expensive to evaluate, and the CE-method can get stuck in local minima.
We introduce novel variants of the CE-method to address these concerns.
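For reference, a minimal sketch of the basic CE-method loop that such variants build on: sample from a parametric distribution, keep an elite fraction, and refit the distribution to the elites. Parameter names and settings here are illustrative.

```python
# Basic CE-method for continuous minimization (the baseline, not the
# paper's variants).
import numpy as np

def cross_entropy_method(f, mu, sigma, n_samples=100, n_elite=10, iters=50):
    """Iteratively sample, select the elite set, and refit the sampler."""
    rng = np.random.default_rng(0)
    for _ in range(iters):
        X = rng.normal(mu, sigma, size=(n_samples, len(mu)))   # sample
        elite = X[np.argsort([f(x) for x in X])[:n_elite]]     # select
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)      # refit
    return mu

# Example: minimize a shifted sphere function.
f = lambda x: np.sum((x - 3.0) ** 2)
print(cross_entropy_method(f, mu=np.zeros(2), sigma=np.ones(2) * 5.0))
# -> approximately [3., 3.]
```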
arXiv Detail & Related papers (2020-09-18T19:51:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.