Scalable Bayesian Transformed Gaussian Processes
- URL: http://arxiv.org/abs/2210.10973v1
- Date: Thu, 20 Oct 2022 02:45:10 GMT
- Title: Scalable Bayesian Transformed Gaussian Processes
- Authors: Xinran Zhu, Leo Huang, Cameron Ibrahim, Eric Hans Lee, David Bindel
- Abstract summary: The Bayesian transformed Gaussian process (BTG) model is a fully Bayesian counterpart to the warped Gaussian process (WGP).
We propose principled and fast techniques for computing with BTG.
Our framework uses doubly sparse quadrature rules, tight quantile bounds, and rank-one matrix algebra to enable both fast model prediction and model selection.
- Score: 10.33253403416662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Bayesian transformed Gaussian process (BTG) model, proposed by Kedem and
Oliveira, is a fully Bayesian counterpart to the warped Gaussian process (WGP)
and marginalizes out a joint prior over input warping and kernel
hyperparameters. This fully Bayesian treatment of hyperparameters often
provides more accurate regression estimates and superior uncertainty
propagation, but is prohibitively expensive. The BTG posterior predictive
distribution, itself estimated through high-dimensional integration, must be
inverted in order to perform model prediction. To make the Bayesian approach
practical and comparable in speed to maximum-likelihood estimation (MLE), we
propose principled and fast techniques for computing with BTG. Our framework
uses doubly sparse quadrature rules, tight quantile bounds, and rank-one matrix
algebra to enable both fast model prediction and model selection. These
scalable methods allow us to regress over higher-dimensional datasets and apply
BTG with layered transformations that greatly improve its expressibility. We
demonstrate that BTG achieves superior empirical performance over MLE-based
models.
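To make the computational bottleneck concrete, the following is a minimal sketch, not the authors' implementation, of the kind of prediction BTG requires: a Box-Cox warping parameter and a kernel lengthscale are marginalized over a small, equally weighted grid of nodes, and a predictive quantile is recovered by inverting the resulting mixture CDF with a root finder. The dense grid, Gaussian (rather than Student-t) predictive factors, equal node weights, and brute-force bracketing below are simplifying stand-ins for the paper's doubly sparse quadrature, marginal-likelihood weights, tight quantile bounds, and rank-one updates; all function names are hypothetical.

```python
# Sketch only: BTG-style prediction by marginalizing a Box-Cox warping
# parameter and a kernel lengthscale over a small grid, then inverting
# the mixture predictive CDF with a root finder.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def boxcox(y, lam):
    # Box-Cox warping; lam = 0 is the log transform.
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def rbf(X1, X2, ell):
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, z, Xs, ell, noise=1e-4):
    # Standard GP posterior mean/std of the warped targets z at test points Xs.
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    Ks = rbf(Xs, X, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    v = np.linalg.solve(L, Ks.T)
    mu = Ks @ alpha
    var = 1.0 + noise - np.sum(v**2, axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def btg_quantile(X, y, xs, q=0.5, lams=(0.0, 0.5, 1.0), ells=(0.3, 1.0, 3.0)):
    # Equal-weight "quadrature" over (warping, lengthscale) nodes; the full
    # model would weight each node by its prior and marginal likelihood.
    nodes = []
    for lam in lams:
        z = boxcox(y, lam)
        for ell in ells:
            mu, sd = gp_predict(X, z, xs[None, :], ell)
            nodes.append((lam, mu[0], sd[0], 1.0))
    wsum = sum(w for *_, w in nodes)

    def mixture_cdf(t):
        # Predictive CDF in the original space: warp t, evaluate each node's
        # Gaussian CDF (the exact model uses Student-t factors), then average.
        return sum(w * norm.cdf((boxcox(t, lam) - mu) / sd)
                   for lam, mu, sd, w in nodes) / wsum

    # Invert the monotone mixture CDF to obtain the q-quantile (median for q=0.5).
    return brentq(lambda t: mixture_cdf(t) - q, 1e-6, 10.0 * y.max())

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 1))
y = np.exp(np.sin(4.0 * X[:, 0]) + 0.1 * rng.standard_normal(30))  # positive targets
xs = np.array([0.5])
print("median:", btg_quantile(X, y, xs, q=0.5))
print("90th percentile:", btg_quantile(X, y, xs, q=0.9))
```

Every evaluation of the mixture CDF touches all quadrature nodes, which is why pruning nodes and tightening the root-finding bracket, as the paper proposes, directly reduces the cost of prediction.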
Related papers
- Flexible Bayesian Last Layer Models Using Implicit Priors and Diffusion Posterior Sampling [7.084307990641011]
We introduce a novel approach that combines diffusion techniques and implicit priors for variational learning of Bayesian last layer weights.
By delivering an explicit and computationally efficient variational lower bound, our method aims to augment the expressive abilities of BLL models.
arXiv Detail & Related papers (2024-08-07T12:59:58Z) - Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, MGVBP stands as a ready-to-use solution for VI in complex models.
arXiv Detail & Related papers (2022-10-26T10:12:31Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Scalable Gaussian Process Hyperparameter Optimization via Coverage Regularization [0.0]
We present a novel algorithm that estimates the smoothness and length-scale parameters of the Matérn kernel in order to improve the robustness of the resulting prediction uncertainties.
We achieve improved UQ over leave-one-out likelihood while maintaining a high degree of scalability as demonstrated in numerical experiments.
arXiv Detail & Related papers (2022-09-22T19:23:37Z) - Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To make function sampling scalable, a random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z) - Bayesian Active Learning with Fully Bayesian Gaussian Processes [0.0]
In active learning, where labeled data is scarce or difficult to obtain, neglecting the bias-variance trade-off can cause inefficient querying.
We show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling.
arXiv Detail & Related papers (2022-05-20T13:52:04Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z) - Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations [27.43948386608]
Variational inference techniques based on inducing variables provide an elegant framework for scalable estimation in Gaussian process (GP) models.
In this work we challenge the common wisdom that optimizing the inducing inputs in the variational framework yields optimal performance.
arXiv Detail & Related papers (2020-03-06T08:53:18Z) - Approximate Inference for Fully Bayesian Gaussian Process Regression [11.47317712333228]
Learning in Gaussian process models occurs through the adaptation of hyperparameters of the mean and the covariance function.
An alternative learning procedure is to infer the posterior over hyperparameters in a hierarchical specification of GPs, which we call Fully Bayesian Gaussian Process Regression (GPR); a minimal illustrative sketch of this idea appears after this list.
We analyze the predictive performance of fully Bayesian GPR on a range of benchmark data sets.
arXiv Detail & Related papers (2019-12-31T17:18:48Z)
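The last entry above shares the central theme of BTG: treating GP hyperparameters in a fully Bayesian way rather than fixing them by maximum likelihood. The following is a minimal illustrative sketch under simplifying assumptions (one-dimensional inputs, an RBF kernel with unit signal variance, a flat prior over log-hyperparameters, and a short random-walk Metropolis chain); it is not the code of any paper listed here.

```python
# Sketch only: fully Bayesian GP regression that marginalizes kernel
# hyperparameters with a short Metropolis chain and mixes the resulting
# per-sample predictive distributions.
import numpy as np

def log_marglik(X, y, log_ell, log_sn):
    # GP log marginal likelihood with an RBF kernel (unit signal variance).
    ell, sn2 = np.exp(log_ell), np.exp(2.0 * log_sn)
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell**2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ a - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2.0 * np.pi)

def posterior_samples(X, y, n=400, step=0.2, seed=0):
    # Random-walk Metropolis over (log lengthscale, log noise std); with a
    # flat prior the posterior is proportional to the marginal likelihood.
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, -1.0])
    lp = log_marglik(X, y, *theta)
    out = []
    for _ in range(n):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_marglik(X, y, *prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        out.append(theta.copy())
    return np.array(out)

def predict_mixture(X, y, xs, samples):
    # Mix the per-sample GP predictive distributions at a single test input xs.
    mus, vs = [], []
    for log_ell, log_sn in samples:
        ell, sn2 = np.exp(log_ell), np.exp(2.0 * log_sn)
        K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell**2) + sn2 * np.eye(len(X))
        ks = np.exp(-0.5 * (xs - X) ** 2 / ell**2)
        L = np.linalg.cholesky(K)
        a = np.linalg.solve(L.T, np.linalg.solve(L, y))
        v = np.linalg.solve(L, ks)
        mus.append(ks @ a)
        vs.append(1.0 + sn2 - v @ v)
    mus, vs = np.array(mus), np.array(vs)
    # Law of total variance: within-sample variance plus between-sample spread.
    return mus.mean(), vs.mean() + mus.var()

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, 40)
y = np.sin(X) + 0.1 * rng.standard_normal(40)
samples = posterior_samples(X, y)[200:]  # discard burn-in
print(predict_mixture(X, y, 0.5, samples))
```

Mixing the per-sample predictive distributions is what yields the improved uncertainty propagation that fully Bayesian treatments are credited with, at the price of repeated kernel factorizations; controlling exactly that cost is what BTG's sparse quadrature and rank-one algebra target.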