Latent variable modeling with random features
- URL: http://arxiv.org/abs/2006.11145v1
- Date: Fri, 19 Jun 2020 14:12:05 GMT
- Title: Latent variable modeling with random features
- Authors: Gregory W. Gundersen, Michael Minyi Zhang, Barbara E. Engelhardt
- Abstract summary: We develop a family of nonlinear dimension reduction models that are easily extensible to non-Gaussian data likelihoods.
Our generalized RFLVMs produce results comparable with other state-of-the-art dimension reduction methods.
- Score: 7.856578780790166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaussian process-based latent variable models are flexible and theoretically
grounded tools for nonlinear dimension reduction, but generalizing to
non-Gaussian data likelihoods within this nonlinear framework is statistically
challenging. Here, we use random features to develop a family of nonlinear
dimension reduction models that are easily extensible to non-Gaussian data
likelihoods; we call these random feature latent variable models (RFLVMs). By
approximating a nonlinear relationship between the latent space and the
observations with a function that is linear with respect to random features, we
induce closed-form gradients of the posterior distribution with respect to the
latent variable. This allows the RFLVM framework to support computationally
tractable nonlinear latent variable models for a variety of data likelihoods in
the exponential family without specialized derivations. Our generalized RFLVMs
produce results comparable with other state-of-the-art dimension reduction
methods on diverse types of data, including neural spike train recordings,
images, and text data.
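To make the mechanism concrete, here is a minimal NumPy sketch of the idea (not the authors' implementation; the dimensions, variable names, and the Gaussian likelihood are illustrative assumptions): the latent-to-observation map is linear in a random Fourier feature expansion, so the gradient of the data log-likelihood with respect to a latent point follows in closed form from the chain rule.

```python
import numpy as np

def rff(X, W, b):
    """Random Fourier feature map phi(X), approximating an RBF kernel."""
    M = W.shape[1]
    return np.sqrt(2.0 / M) * np.cos(X @ W + b)

def grad_loglik_x(x_n, y_n, W, b, beta, s2=1.0):
    """Closed-form gradient of a Gaussian log-likelihood w.r.t. one latent point x_n."""
    M = W.shape[1]
    z = x_n @ W + b                              # (M,) random projections
    phi = np.sqrt(2.0 / M) * np.cos(z)           # (M,) features
    dphi_dx = -np.sqrt(2.0 / M) * W * np.sin(z)  # (Q, M): column m is d phi_m / d x_n
    f = phi @ beta                               # (D,) mean of y_n: linear in the features
    J = dphi_dx @ beta                           # (Q, D) Jacobian of f w.r.t. x_n
    return J @ (y_n - f) / s2                    # (Q,) gradient of log N(y_n | f, s2 I)

# Toy dimensions (hypothetical): N points, D output dims, Q latent dims, M random features.
rng = np.random.default_rng(0)
N, D, Q, M = 100, 5, 2, 50
X = rng.normal(size=(N, Q))                      # latent variables
W = rng.normal(size=(Q, M))                      # random frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=M)        # random phases
beta = rng.normal(size=(M, D))                   # linear weights
Y = rff(X, W, b) @ beta + 0.1 * rng.normal(size=(N, D))  # simulated observations

g = grad_loglik_x(X[0], Y[0], W, b, beta)        # used inside posterior sampling/optimization
```

For other exponential-family likelihoods (Poisson, negative binomial, multinomial), the same structure applies, with the Gaussian residual replaced by the gradient of the corresponding log-likelihood through its link function.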
Related papers
- Scalable Random Feature Latent Variable Models [8.816134440622696]
We introduce a stick-breaking construction for the Dirichlet process (DP) to obtain an explicit PDF, together with a novel VBI algorithm called "block coordinate descent variational inference" (BCD-VI); a minimal stick-breaking sketch appears after this list.
This enables the development of a scalable version of RFLVMs, in short SRFLVMs.
arXiv Detail & Related papers (2024-10-23T09:22:43Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Preventing Model Collapse in Gaussian Process Latent Variable Models [11.45681373843122]
This paper theoretically examines the impact of projection variance on model collapse through the lens of a linear Fourier-feature GPLVM.
We tackle model collapse due to inadequate kernel flexibility by integrating the spectral mixture (SM) kernel and a differentiable random Fourier feature (RFF) kernel approximation.
The proposed GPLVM, named advisedRFLVM, is evaluated across diverse datasets and consistently outperforms various competing models.
arXiv Detail & Related papers (2024-04-02T06:58:41Z) - Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs)
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not impose any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Bayesian Non-linear Latent Variable Modeling via Random Fourier Features [7.856578780790166]
We present a method to perform Markov chain Monte Carlo inference for generalized nonlinear latent variable modeling.
Inference for GPLVMs is computationally tractable only when the data likelihood is Gaussian.
We show that we can generalize GPLVMs to non-Gaussian observations, such as Poisson, negative binomial, and multinomial distributions.
arXiv Detail & Related papers (2023-06-14T08:42:10Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - Learning from few examples with nonlinear feature maps [68.8204255655161]
We explore the phenomenon and reveal key relationships between the dimensionality of an AI model's feature space, the non-degeneracy of data distributions, and the model's generalisation capabilities.
The main thrust of our present analysis is on the influence of nonlinear feature transformations mapping original data into higher- and possibly infinite-dimensional spaces on the resulting model's generalisation capabilities.
arXiv Detail & Related papers (2022-03-31T10:36:50Z) - Generalised Gaussian Process Latent Variable Models (GPLVM) with Stochastic Variational Inference [9.468270453795409]
We study the doubly stochastic formulation of the Bayesian GPLVM model, which is amenable to minibatch training.
We show how this framework is compatible with different latent variable formulations and perform experiments to compare a suite of models.
We demonstrate how we can train in the presence of massively missing data and obtain high-fidelity reconstructions.
arXiv Detail & Related papers (2022-02-25T21:21:51Z) - Structural Sieves [0.0]
We show that certain deep networks are particularly well suited as a nonparametric sieve to approximate regression functions.
We show that restrictions of this kind are imposed in a more straightforward manner if a sufficiently flexible version of the latent variable model is in fact used to approximate the unknown regression function.
arXiv Detail & Related papers (2021-12-01T16:37:02Z) - Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs)
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
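As a side note on the "Scalable Random Feature Latent Variable Models" entry above, the stick-breaking construction it references has a compact explicit form; the sketch below is illustrative only (truncated to K components, with hypothetical parameter names), not code from that paper.

```python
import numpy as np

def stick_breaking_weights(alpha, K, seed=None):
    """Truncated stick-breaking construction for a Dirichlet process (DP).

    Draws v_k ~ Beta(1, alpha) and sets pi_k = v_k * prod_{j<k} (1 - v_j),
    yielding an explicit, nearly normalized vector of mixture weights.
    """
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)                               # stick fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # stick left before step k
    return v * remaining                                           # pi_1, ..., pi_K

pi = stick_breaking_weights(alpha=2.0, K=10, seed=0)
print(pi.round(3), pi.sum())  # weights decay; the sum approaches 1 as K grows
```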