Approximate Inference for Fully Bayesian Gaussian Process Regression
- URL: http://arxiv.org/abs/1912.13440v2
- Date: Mon, 6 Apr 2020 14:22:14 GMT
- Title: Approximate Inference for Fully Bayesian Gaussian Process Regression
- Authors: Vidhi Lalchand and Carl Edward Rasmussen
- Abstract summary: Learning in Gaussian Process models occurs through the adaptation of hyperparameters of the mean and the covariance function.
An alternative learning procedure is to infer the posterior over hyperparameters in a hierarchical specification of GPs we call \textit{Fully Bayesian Gaussian Process Regression} (GPR).
We analyze the predictive performance for fully Bayesian GPR on a range of benchmark data sets.
- Score: 11.47317712333228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning in Gaussian Process models occurs through the adaptation of
hyperparameters of the mean and the covariance function. The classical approach
entails maximizing the marginal likelihood yielding fixed point estimates (an
approach called \textit{Type II maximum likelihood} or ML-II). An alternative
learning procedure is to infer the posterior over hyperparameters in a
hierarchical specification of GPs we call \textit{Fully Bayesian Gaussian
Process Regression} (GPR). This work considers two approximation schemes for
the intractable hyperparameter posterior: 1) Hamiltonian Monte Carlo (HMC)
yielding a sampling-based approximation and 2) Variational Inference (VI) where
the posterior over hyperparameters is approximated by a factorized Gaussian
(mean-field) or a full-rank Gaussian accounting for correlations between
hyperparameters. We analyze the predictive performance for fully Bayesian GPR
on a range of benchmark data sets.
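To make the contrast between ML-II and the fully Bayesian treatment concrete, here is a minimal sketch that implements the GP log marginal likelihood for an RBF kernel and samples the hyperparameter posterior. It is an illustration, not the paper's implementation: random-walk Metropolis stands in for HMC, and the standard-normal priors on the log-hyperparameters, the proposal scale, and the toy data are all assumptions.

```python
# A minimal sketch of fully Bayesian GP regression: instead of maximizing
# the marginal likelihood (ML-II) for a single point estimate, we sample
# the hyperparameter posterior. Random-walk Metropolis stands in for the
# paper's HMC; priors, proposal scale, and the toy data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine wave
X = np.linspace(0, 5, 30)[:, None]
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(30)

def log_marginal_likelihood(theta):
    """log p(y | theta) for an RBF kernel; theta = log(sigma_f, ell, sigma_n)."""
    sf, ell, sn = np.exp(theta)
    d2 = (X - X.T) ** 2                       # squared distances (1-D inputs)
    K = sf**2 * np.exp(-0.5 * d2 / ell**2) + sn**2 * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi)

def log_prior(theta):
    """Independent N(0, 1) priors on the log-hyperparameters (an assumption)."""
    return -0.5 * theta @ theta

def metropolis(n_samples=2000, step=0.15):
    theta = np.zeros(3)                       # start at sigma_f = ell = sigma_n = 1
    logp = log_marginal_likelihood(theta) + log_prior(theta)
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(3)
        logp_prop = log_marginal_likelihood(prop) + log_prior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject
            theta, logp = prop, logp_prop
        samples.append(theta)
    return np.exp(np.array(samples))          # back to positive hyperparameters

posterior = metropolis()
print("posterior means (sigma_f, ell, sigma_n):", posterior[500:].mean(axis=0))
```

Under ML-II one would instead optimize log_marginal_likelihood to a single point estimate; the samples above retain hyperparameter uncertainty, which is what the fully Bayesian treatment propagates into predictions.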
Related papers
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A multi-output Gaussian process (MOGP) prior over the parameters of the dedicated likelihoods for classification, regression, and point-process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z) - Sparse Gaussian Process Hyperparameters: Optimize or Integrate? [5.949779668853556]
We propose an algorithm for sparse Gaussian process regression which leverages MCMC to sample from the hyperparameter posterior.
We compare this scheme against natural baselines in the literature, including stochastic variational GPs (SVGPs), together with an extensive computational analysis.
arXiv Detail & Related papers (2022-11-04T14:06:59Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps (a toy AIS sketch appears after this list).
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are made through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Noise Estimation in Gaussian Process Regression [1.5002438468152661]
The presented method estimates the variance of the correlated error and the variance of the noise by maximizing a marginal likelihood function.
We demonstrate the computational advantages and robustness of the presented approach compared to traditional parameter optimization.
arXiv Detail & Related papers (2022-06-20T19:36:03Z) - Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z) - Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Laplace Matching for fast Approximate Inference in Generalized Linear Models [27.70274403550477]
We propose an approximate inference framework primarily designed to be computationally cheap while still achieving high approximation quality.
The concept, which we call \textit{Laplace Matching}, involves closed-form, approximate, bi-directional transformations between the parameter spaces of exponential families.
This effectively turns inference in GLMs into conjugate inference (with small approximation errors).
arXiv Detail & Related papers (2021-05-07T08:25:17Z) - Marginalised Gaussian Processes with Nested Sampling [10.495114898741203]
Gaussian process (GP) models are a rich distribution over functions with inductive biases controlled by a kernel function.
This work presents an alternative learning procedure where the hyperparameters of the kernel function are marginalised using Nested Sampling (NS); a toy NS sketch appears after this list.
arXiv Detail & Related papers (2020-10-30T16:04:35Z) - Variable selection for Gaussian process regression through a sparse projection [0.802904964931021]
This paper presents a new variable selection approach integrated with Gaussian process (GP) regression.
The choice of tuning parameters and the accuracy of the estimation are evaluated in simulations against some chosen benchmark approaches.
arXiv Detail & Related papers (2020-08-25T01:06:10Z)
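A toy AIS sketch, referenced from the Annealed Importance Sampling entry above. It anneals from a normalized standard-normal prior to an unnormalized 1-D target along a fixed geometric path; the cited paper's contribution is to optimize these bridging distributions, which is not shown here, and all constants are illustrative.

```python
# A minimal AIS sketch on a toy 1-D problem: anneal from a normalized
# standard-normal prior to an unnormalized target along a fixed geometric
# path. The cited paper optimizes the bridging distributions; here the
# schedule is fixed, and all constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):      # N(0, 1), normalized
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def log_target(x):     # unnormalized: 5 * N(2, 0.5^2), so the true Z is 5
    return np.log(5.0) - 0.5 * ((x - 2.0) / 0.5) ** 2 - np.log(0.5 * np.sqrt(2 * np.pi))

betas = np.linspace(0.0, 1.0, 50)             # fixed annealing schedule

def ais(n_chains=500, mh_steps=5, step=0.5):
    x = rng.standard_normal(n_chains)         # exact draws from the prior
    log_w = np.zeros(n_chains)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # importance-weight increment for the move from beta_0 to beta_1
        log_w += (b1 - b0) * (log_target(x) - log_prior(x))
        # a few Metropolis steps targeting the bridging distribution at beta_1
        log_bridge = lambda z: (1 - b1) * log_prior(z) + b1 * log_target(z)
        for _ in range(mh_steps):
            prop = x + step * rng.standard_normal(n_chains)
            accept = np.log(rng.uniform(size=n_chains)) < log_bridge(prop) - log_bridge(x)
            x = np.where(accept, prop, x)
    return np.exp(log_w).mean()               # prior is normalized, so this estimates Z

print("AIS estimate of Z (true value 5):", ais())
```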
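A toy nested sampling sketch, referenced from the Marginalised Gaussian Processes with Nested Sampling entry above. It estimates the evidence Z for a 1-D toy likelihood under a uniform prior; in the paper's setting, the likelihood would be the GP marginal likelihood and the variable the kernel hyperparameters. Rejection sampling from the constrained prior is a simplification that is only viable in low dimensions.

```python
# A minimal nested-sampling sketch estimating the evidence
# Z = integral of L(x) p(x) dx for a toy 1-D likelihood under a uniform
# prior. In the cited paper, L would be the GP marginal likelihood and x
# the kernel hyperparameters; rejection sampling from the constrained
# prior (used here for simplicity) only works in low dimensions.
import numpy as np

rng = np.random.default_rng(0)

def likelihood(x):                 # toy likelihood, N(1, 0.3^2)-shaped
    return np.exp(-0.5 * ((x - 1.0) / 0.3) ** 2)

def sample_prior(size=None):       # uniform prior on [-5, 5]
    return rng.uniform(-5, 5, size)

def nested_sampling(n_live=100, n_iter=700):
    live = sample_prior(n_live)
    live_L = likelihood(live)
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_L)              # lowest-likelihood live point
        L_min = live_L[worst]
        X = np.exp(-i / n_live)                # estimated remaining prior mass
        Z += L_min * (X_prev - X)              # accumulate the evidence
        X_prev = X
        while True:                            # replace it under L > L_min
            x_new = sample_prior()
            if likelihood(x_new) > L_min:
                break
        live[worst], live_L[worst] = x_new, likelihood(x_new)
    return Z + live_L.mean() * X_prev          # remaining live-point mass

# Analytic evidence here: 0.3 * sqrt(2*pi) / 10 ~= 0.0752
print("NS estimate of Z:", nested_sampling())
```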