Generalised Gaussian Process Latent Variable Models (GPLVM) with
Stochastic Variational Inference
- URL: http://arxiv.org/abs/2202.12979v1
- Date: Fri, 25 Feb 2022 21:21:51 GMT
- Title: Generalised Gaussian Process Latent Variable Models (GPLVM) with
Stochastic Variational Inference
- Authors: Vidhi Lalchand, Aditya Ravuri, Neil D. Lawrence
- Abstract summary: We study the doubly stochastic formulation of the Bayesian GPLVM model, amenable to minibatch training.
We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models.
We demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions.
- Score: 9.468270453795409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian process latent variable models (GPLVM) are a flexible and non-linear
approach to dimensionality reduction, extending classical Gaussian processes to
an unsupervised learning context. The Bayesian incarnation of the GPLVM [Titsias
and Lawrence, 2010] uses a variational framework, where the posterior over
latent variables is approximated by a well-behaved variational family, a
factorized Gaussian yielding a tractable lower bound. However, the
non-factorisability of the lower bound prevents truly scalable inference. In
this work, we study the doubly stochastic formulation of the Bayesian GPLVM
model, amenable to minibatch training. We show how this framework is
compatible with different latent variable formulations and perform experiments
to compare a suite of models. Further, we demonstrate how we can train in the
presence of massively missing data and obtain high-fidelity reconstructions. We
demonstrate the model's performance by benchmarking against the canonical
sparse GPLVM for high-dimensional data examples.
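As a point of reference, and in standard sparse-variational-GP notation that the abstract itself does not spell out (inducing variables U, variational posteriors q(X) over latents and q(U) over inducing variables), the doubly stochastic bound in question typically takes the form

\[
\mathcal{L} = \sum_{n=1}^{N} \mathbb{E}_{q(x_n)\,q(f_n)}\!\left[\log p(y_n \mid f_n)\right]
- \mathrm{KL}\!\left(q(X)\,\|\,p(X)\right)
- \mathrm{KL}\!\left(q(U)\,\|\,p(U)\right).
\]

Since the likelihood term decomposes over data points, it can be estimated from a minibatch B rescaled by N/|B|, and the expectation over q(x_n) is handled by reparameterised Monte Carlo samples; these two layers of sampling are what make the scheme "doubly stochastic" and compatible with minibatch training.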
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
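As background not stated in the summary above, the estimator whose training and generalization performance such derivations characterise is the ridge regressor

\[
\hat{\beta}_\lambda = \left(X^\top X + \lambda I\right)^{-1} X^\top y,
\]

with its risk typically analysed in the high-dimensional regime where the number of features and the number of samples grow proportionally.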
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Preventing Model Collapse in Gaussian Process Latent Variable Models [11.45681373843122]
This paper theoretically examines the impact of projection variance on model collapse through the lens of a linear Fourier GPLVM.
We tackle model collapse due to inadequate kernel flexibility by integrating the spectral mixture (SM) kernel and a differentiable random Fourier feature (RFF) kernel approximation.
The proposed GPLVM, named advisedRFLVM, is evaluated across diverse datasets and consistently outperforms various competing models.
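For reference, the spectral mixture kernel cited here (Wilson and Adams, 2013) models the kernel's spectral density as a mixture of Gaussians; in one dimension, with weights w_q, spectral means mu_q and variances v_q, it reads

\[
k_{\mathrm{SM}}(\tau) = \sum_{q=1}^{Q} w_q \exp\!\left(-2\pi^2 \tau^2 v_q\right) \cos\!\left(2\pi \tau \mu_q\right), \qquad \tau = x - x',
\]

which can approximate a broad family of stationary kernels and supplies the kernel flexibility the summary refers to.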
arXiv Detail & Related papers (2024-04-02T06:58:41Z) - Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs).
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not place any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z) - Multi-Response Heteroscedastic Gaussian Process Models and Their
Inference [1.52292571922932]
We propose a novel framework for the modeling of heteroscedastic covariance functions.
We employ variational inference to approximate the posterior and facilitate posterior predictive modeling.
We show that our proposed framework offers a robust and versatile tool for a wide array of applications.
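A minimal sketch of the heteroscedastic construction such frameworks build on, with the caveat that the paper's exact multi-response parameterisation may differ: place GP priors on both the mean and the log of the input-dependent noise variance,

\[
y_n = f(x_n) + \varepsilon_n, \qquad \varepsilon_n \sim \mathcal{N}\!\left(0, \exp g(x_n)\right), \qquad f \sim \mathcal{GP}, \quad g \sim \mathcal{GP},
\]

so the observation noise varies over the input space rather than being a single constant.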
arXiv Detail & Related papers (2023-08-29T15:06:47Z) - Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A multi-output GP (MOGP) prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
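For orientation, a single-task Gaussian Cox process models a point-process intensity through a GP-distributed function; one standard (log-Gaussian) form, which the multi-task construction extends by coupling tasks through the MOGP prior, is

\[
\lambda(x) = \exp\!\left(f(x)\right), \quad f \sim \mathcal{GP}, \qquad
p\!\left(\{x_i\}_{i=1}^{I} \mid \lambda\right) = \exp\!\left(-\int_{\mathcal{X}} \lambda(x)\,dx\right) \prod_{i=1}^{I} \lambda(x_i).
\]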
arXiv Detail & Related papers (2023-08-29T15:01:01Z) - Bayesian Non-linear Latent Variable Modeling via Random Fourier Features [7.856578780790166]
We present a method to perform Markov chain Monte Carlo inference for generalized nonlinear latent variable modeling.
Inference for GPLVMs is computationally tractable only when the data likelihood is Gaussian.
We show that we can generalize GPLVMs to non-Gaussian observations, such as Poisson, negative binomial, and multinomial distributions.
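Concretely, the random Fourier feature construction behind this generalisation (Rahimi and Recht, 2007) replaces GP function values with a linear model in randomised trigonometric features: for a stationary kernel with spectral density p(omega),

\[
\varphi(x) = \sqrt{\tfrac{2}{M}}\left[\cos\!\left(\omega_m^\top x + b_m\right)\right]_{m=1}^{M}, \quad \omega_m \sim p(\omega), \; b_m \sim \mathrm{U}[0, 2\pi], \qquad k(x, x') \approx \varphi(x)^\top \varphi(x'),
\]

so that f(x) = phi(x)^T w with a Gaussian prior on w can be paired with a non-Gaussian likelihood, e.g. y_n ~ Poisson(exp f(x_n)), making Markov chain Monte Carlo over the finite-dimensional weights feasible.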
arXiv Detail & Related papers (2023-06-14T08:42:10Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
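A sketch of the Laplace step such models lean on, in its generic form rather than the paper's exact algorithm: approximate the posterior over latents by a full-covariance Gaussian centred at a mode of the joint density,

\[
q(z \mid x) = \mathcal{N}\!\left(z;\, \hat{z},\, \Lambda^{-1}\right), \qquad
\hat{z} = \arg\max_{z} \log p(x, z), \qquad
\Lambda = -\left.\nabla_z^2 \log p(x, z)\right|_{z=\hat{z}},
\]

which captures posterior correlations between latent dimensions that a fully-factorized Gaussian cannot.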
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Scalable Variational Gaussian Processes via Harmonic Kernel
Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Generalized Matrix Factorization: efficient algorithms for fitting
generalized linear latent variable models to large data arrays [62.997667081978825]
Generalized Linear Latent Variable Models (GLLVMs) generalize classical Gaussian factor models to non-Gaussian responses.
Current algorithms for estimating model parameters in GLLVMs require intensive computation and do not scale to large datasets.
We propose a new approach for fitting GLLVMs to high-dimensional datasets, based on approximating the model using penalized quasi-likelihood.
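For concreteness, a standard GLLVM formulation (notation assumed here, not taken from the abstract) links each response to a low-dimensional latent score through a GLM-style link function:

\[
g\!\left(\mathbb{E}[y_{ij}]\right) = \beta_{0j} + u_i^\top \lambda_j,
\]

where u_i is the latent factor for observation i, lambda_j the loadings for response j, and g a link matched to the response distribution (e.g. the log link for counts).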
arXiv Detail & Related papers (2020-10-06T04:28:19Z) - Sparse Gaussian Processes with Spherical Harmonic Features [14.72311048788194]
We introduce a new class of inter-domain variational Gaussian processes (GPs).
Our inference scheme is comparable to variational Fourier features, but it does not suffer from the curse of dimensionality.
Our experiments show that our model is able to fit a regression model to a dataset with 6 million entries two orders of magnitude faster.
arXiv Detail & Related papers (2020-06-30T10:19:32Z) - On the Variational Posterior of Dirichlet Process Deep Latent Gaussian
Mixture Models [0.0]
We present an alternative treatment of the variational posterior of the Dirichlet Process Deep Latent Gaussian Mixture Model (DP-DLGMM).
We show that our model is capable of generating realistic samples for each cluster obtained, and manifests competitive performance in a semi-supervised setting.
arXiv Detail & Related papers (2020-06-16T08:46:18Z)