On MCMC for variationally sparse Gaussian processes: A pseudo-marginal
approach
- URL: http://arxiv.org/abs/2103.03321v1
- Date: Thu, 4 Mar 2021 20:48:29 GMT
- Title: On MCMC for variationally sparse Gaussian processes: A pseudo-marginal
approach
- Authors: Karla Monterrubio-G\'omez and Sara Wade
- Abstract summary: Gaussian processes (GPs) are frequently used in machine learning and statistics to construct powerful models.
We propose a pseudo-marginal (PM) scheme that offers asymptotically exact inference as well as computational gains through doubly stochastic estimators for the intractable likelihood and large datasets.
- Score: 0.76146285961466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian processes (GPs) are frequently used in machine learning and
statistics to construct powerful models. However, when employing GPs in
practice, important considerations must be made regarding the high
computational burden, approximation of the posterior, choice of the covariance
function and inference of its hyperparameters. To address these issues, Hensman
et al. (2015) combine variationally sparse GPs with Markov chain Monte Carlo
(MCMC) to derive a scalable, flexible and general framework for GP models.
Nevertheless, the resulting approach requires intractable likelihood
evaluations for many observation models. To bypass this problem, we propose a
pseudo-marginal (PM) scheme that offers asymptotically exact inference as well
as computational gains through doubly stochastic estimators for the intractable
likelihood and large datasets. In complex models, the advantages of the PM
scheme are particularly evident, and we demonstrate this on a two-level GP
regression model with a nonparametric covariance function to capture
non-stationarity.
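The pseudo-marginal idea can be illustrated with a minimal sketch: plug an unbiased Monte Carlo estimate of the intractable likelihood into a Metropolis-Hastings acceptance ratio, reusing the stored estimate for the current state. The toy latent-variable model below stands in for the GP likelihood; all names and values are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model: y ~ N(u, 1) with latent u ~ N(theta, 1),
# so the exact marginal is y ~ N(theta, 2). Here the marginal is
# pretended intractable and estimated by Monte Carlo.
y = 1.5  # a single observation

def lik_estimate(theta, n_mc=64):
    """Unbiased Monte Carlo estimate of p(y | theta) = E_u[ N(y; u, 1) ]."""
    u = rng.normal(theta, 1.0, size=n_mc)  # draws from p(u | theta)
    return np.mean(np.exp(-0.5 * (y - u) ** 2) / np.sqrt(2 * np.pi))

def pm_mh(n_iter=4000, step=0.5):
    theta = 0.0
    z = lik_estimate(theta)  # stored estimate: reused, never refreshed
    chain = []
    for _ in range(n_iter):
        theta_prop = theta + step * rng.normal()
        z_prop = lik_estimate(theta_prop)
        # Flat prior and symmetric proposal, so the ratio is z_prop / z.
        if rng.uniform() < z_prop / z:
            theta, z = theta_prop, z_prop
        chain.append(theta)
    return np.array(chain)

chain = pm_mh()
print(chain.mean())  # posterior mean should be near y = 1.5
```

Keeping the stored estimate `z` fixed while the state is unchanged is what makes the scheme target the exact posterior despite the noisy likelihood.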
Related papers
- Gaussian Process Regression with Soft Inequality and Monotonicity Constraints [0.0]
We introduce a new GP method that enforces the physical constraints in a probabilistic manner.
This GP model is trained by the quantum-inspired Hamiltonian Monte Carlo (QHMC)
arXiv Detail & Related papers (2024-04-03T17:09:25Z) - Model-Based Reparameterization Policy Gradient Methods: Theory and
Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
arXiv Detail & Related papers (2023-10-30T18:43:21Z) - Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A MOGP prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z) - Sparse Gaussian Process Hyperparameters: Optimize or Integrate? [5.949779668853556]
We propose an algorithm for sparse Gaussian process regression which leverages MCMC to sample from the hyperparameter posterior.
We compare this scheme against natural baselines in the literature, including stochastic variational GPs (SVGPs), with an extensive computational analysis.
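Sampling GP hyperparameters with MCMC, as opposed to point-optimizing them, can be sketched with a simple random-walk Metropolis sampler over the log-lengthscale of an exact GP marginal likelihood. The dataset, kernel, and sampler settings below are generic illustrations, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny synthetic dataset (illustrative only).
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X) + 0.1 * rng.normal(size=20)

def gp_log_marginal(log_ls, noise=0.1):
    """GP log marginal likelihood with an RBF kernel and fixed noise."""
    ls = np.exp(log_ls)
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ls ** 2)
    K += noise ** 2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2 * np.pi))

def mh_hyper(n_iter=2000, step=0.3):
    log_ls = 0.0
    ll = gp_log_marginal(log_ls)
    samples = []
    for _ in range(n_iter):
        prop = log_ls + step * rng.normal()
        ll_prop = gp_log_marginal(prop)
        # Flat prior on the log-lengthscale; accept on the log scale.
        if np.log(rng.uniform()) < ll_prop - ll:
            log_ls, ll = prop, ll_prop
        samples.append(log_ls)
    return np.exp(np.array(samples))

lengthscales = mh_hyper()
```

Instead of a single optimized lengthscale, the result is a posterior sample over lengthscales, which propagates hyperparameter uncertainty into predictions.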
arXiv Detail & Related papers (2022-11-04T14:06:59Z) - Non-Gaussian Process Regression [0.0]
We extend the GP framework into a new class of time-changed GPs that allow for straightforward modelling of heavy-tailed non-Gaussian behaviours.
We present Markov chain Monte Carlo inference procedures for this model and demonstrate the potential benefits.
arXiv Detail & Related papers (2022-09-07T13:08:22Z) - Scalable mixed-domain Gaussian processes [0.0]
We derive a basis function approximation scheme for mixed-domain covariance functions.
The proposed approach is naturally applicable to Bayesian GP regression with arbitrary observation models.
We demonstrate the approach in a longitudinal data modelling context and show that it approximates the exact GP model accurately.
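A basis function approximation to a GP can be sketched in one dimension with a generic reduced-rank (Hilbert-space) construction: project the kernel onto Laplacian eigenfunctions on an interval, weighted by the kernel's spectral density. This is a standard sketch, not the paper's mixed-domain construction; the domain size `L` and number of basis functions `m` are illustrative choices.

```python
import numpy as np

def rbf_spectral_density(w, ls=1.0, var=1.0):
    """Spectral density of the 1D RBF kernel."""
    return var * ls * np.sqrt(2 * np.pi) * np.exp(-0.5 * (w * ls) ** 2)

def basis_approx_kernel(x1, x2, L=4.0, m=32, ls=1.0, var=1.0):
    """Reduced-rank RBF kernel via Laplacian eigenfunctions on [-L, L]."""
    j = np.arange(1, m + 1)
    sqrt_lam = np.pi * j / (2 * L)  # square roots of the eigenvalues
    phi1 = np.sin(sqrt_lam * (x1[:, None] + L)) / np.sqrt(L)
    phi2 = np.sin(sqrt_lam * (x2[:, None] + L)) / np.sqrt(L)
    S = rbf_spectral_density(sqrt_lam, ls, var)
    return (phi1 * S) @ phi2.T

x = np.linspace(-1.0, 1.0, 5)
K_approx = basis_approx_kernel(x, x)
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
err = np.max(np.abs(K_approx - K_exact))
print(err)  # small when L and m are large enough
```

The payoff is that inference costs scale with the number of basis functions `m` rather than with the number of data points.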
arXiv Detail & Related papers (2021-11-03T04:47:37Z) - Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - MuyGPs: Scalable Gaussian Process Hyperparameter Estimation Using Local
Cross-Validation [1.2233362977312945]
We present MuyGPs, a novel efficient GP hyperparameter estimation method.
MuyGPs builds upon prior methods that take advantage of the nearest neighbors structure of the data.
We show that our method outperforms all known competitors both in terms of time-to-solution and the root mean squared error of the predictions.
arXiv Detail & Related papers (2021-04-29T18:10:21Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Gaussian Process-based Min-norm Stabilizing Controller for
Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that this resulting optimization problem is convex, and we call it Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP)
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.