Flexible mean field variational inference using mixtures of
non-overlapping exponential families
- URL: http://arxiv.org/abs/2010.06768v1
- Date: Wed, 14 Oct 2020 01:46:56 GMT
- Title: Flexible mean field variational inference using mixtures of
non-overlapping exponential families
- Authors: Jeffrey P. Spence
- Abstract summary: I show that standard mean field variational inference can fail to produce sensible results for models with sparsity-inducing priors.
I show that any mixture of a diffuse exponential family and a point mass at zero used to model sparsity forms an exponential family.
- Score: 6.599344783327053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse models are desirable for many applications across diverse domains as
they can perform automatic variable selection, aid interpretability, and
provide regularization. When fitting sparse models in a Bayesian framework,
however, analytically obtaining a posterior distribution over the parameters of
interest is intractable for all but the simplest cases. As a result,
practitioners must rely on either sampling algorithms such as Markov chain
Monte Carlo or variational methods to obtain an approximate posterior. Mean
field variational inference is a particularly simple and popular framework that
is often amenable to analytically deriving closed-form parameter updates. When
all distributions in the model are members of exponential families and are
conditionally conjugate, optimization schemes can often be derived by hand.
Yet, I show that standard mean field variational inference can fail to
produce sensible results for models with sparsity-inducing priors, such as the
spike-and-slab. Fortunately, such pathological behavior can be remedied as I
show that mixtures of exponential family distributions with non-overlapping
support form an exponential family. In particular, any mixture of a diffuse
exponential family and a point mass at zero to model sparsity forms an
exponential family. Furthermore, specific choices of these distributions
maintain conditional conjugacy. I use two applications to motivate these
results: one from statistical genetics that has connections to generalized
least squares with a spike-and-slab prior on the regression coefficients; and
sparse probabilistic principal component analysis. The theoretical results
presented here are broadly applicable beyond these two examples.
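To make the central claim concrete, here is a sketch of the algebra in standard exponential-family notation; the symbols below are my own choices, not taken verbatim from the paper. Suppose the diffuse component has density $f_\theta(x) = h(x)\exp(\theta^\top T(x) - A(\theta))$ with respect to Lebesgue measure $\lambda$. Mixing it with a point mass at zero yields a single exponential family with respect to the dominating measure $\mu = \delta_0 + \lambda$:

```latex
% Spike-and-slab mixture p(x) = pi * delta_0(x) + (1 - pi) * f_theta(x),
% rewritten as one exponential family w.r.t. mu = delta_0 + lambda.
\begin{align*}
p_{\pi,\theta}(x)
  &= \tilde h(x)\,
     \exp\!\big(\eta_1^\top \mathbf{1}\{x \neq 0\}\, T(x)
       + \eta_2\, \mathbf{1}\{x \neq 0\} - \tilde A(\eta_1, \eta_2)\big),\\
\eta_1 &= \theta, \qquad
\eta_2 = \log\tfrac{1-\pi}{\pi} - A(\theta), \qquad
\tilde h(x) = h(x)^{\mathbf{1}\{x \neq 0\}},\\
\tilde A(\eta_1, \eta_2)
  &= \log\!\big(1 + e^{\eta_2 + A(\eta_1)}\big) = -\log\pi.
\end{align*}
```

The sufficient statistics are $(\mathbf{1}\{x \neq 0\}\,T(x),\ \mathbf{1}\{x \neq 0\})$: the non-zero event enters as one extra natural parameter, which is why conjugacy arguments for the diffuse component can carry over to the mixture.

To see how such factors behave inside mean field updates, the following is a minimal coordinate-ascent sketch for sparse Bayesian linear regression with a spike-and-slab prior. It follows the standard variational updates for this model (in the style of Carbonetto and Stephens, 2012), not the paper's statistical-genetics application; the hyperparameter names and the function `cavi_spike_slab` are illustrative assumptions.

```python
import numpy as np

def cavi_spike_slab(X, y, sigma2=1.0, sigma_b2=1.0, pi=0.1, n_iter=100):
    """Coordinate-ascent VI for y = X @ beta + N(0, sigma2 * I), where
    beta_j = s_j * b_j with s_j ~ Bernoulli(pi) and b_j ~ N(0, sigma_b2).

    Each mean field factor q(beta_j) is itself a mixture of non-overlapping
    components: alpha_j * N(mu_j, s2_j) + (1 - alpha_j) * delta_0.
    """
    n, p = X.shape
    xtx = np.einsum("ij,ij->j", X, X)         # x_j' x_j for each column j
    alpha = np.full(p, pi)                    # posterior inclusion probabilities
    mu = np.zeros(p)                          # slab means
    s2 = sigma2 / (xtx + sigma2 / sigma_b2)   # slab variances (closed form)
    Xbeta = X @ (alpha * mu)                  # current fitted values
    for _ in range(n_iter):
        for j in range(p):
            # Residual with coordinate j's own contribution removed.
            r_j = y - Xbeta + X[:, j] * (alpha[j] * mu[j])
            mu[j] = s2[j] / sigma2 * (X[:, j] @ r_j)
            # Log-odds that beta_j comes from the slab rather than the spike.
            log_odds = (np.log(pi / (1.0 - pi))
                        + 0.5 * np.log(s2[j] / sigma_b2)
                        + 0.5 * mu[j] ** 2 / s2[j])
            alpha[j] = 1.0 / (1.0 + np.exp(-log_odds))
            Xbeta = y - r_j + X[:, j] * (alpha[j] * mu[j])
    return alpha, mu, s2
```

Under these assumptions, `alpha * mu` is the approximate posterior mean of the coefficients and `alpha` gives per-variable inclusion probabilities, so variables with small `alpha[j]` are effectively selected out of the model.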
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Likelihood Based Inference in Fully and Partially Observed Exponential Family Graphical Models with Intractable Normalizing Constants [4.532043501030714]
Probabilistic graphical models that encode an underlying Markov random field are fundamental building blocks of generative modeling.
This paper demonstrates that full likelihood-based analysis of these models is feasible in a computationally efficient manner.
arXiv Detail & Related papers (2024-04-27T02:58:22Z)
- Variational autoencoder with weighted samples for high-dimensional non-parametric adaptive importance sampling [0.0]
We extend the existing framework to the case of weighted samples by introducing a new objective function.
In order to add flexibility to the model and to be able to learn multimodal distributions, we consider a learnable prior distribution.
We exploit the proposed procedure in existing adaptive importance sampling algorithms to draw points from a target distribution and to estimate a rare event probability in high dimension.
arXiv Detail & Related papers (2023-10-13T15:40:55Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- A Heavy-Tailed Algebra for Probabilistic Programming [53.32246823168763]
We propose a systematic approach for analyzing the tails of random variables.
We show how this approach can be used during the static analysis (before drawing samples) pass of a probabilistic programming language compiler.
Our empirical results confirm that inference algorithms that leverage our heavy-tailed algebra attain superior performance across a number of density modeling and variational inference tasks.
arXiv Detail & Related papers (2023-06-15T16:37:36Z)
- Optimal regularizations for data generation with probabilistic graphical models [0.0]
Empirically, well-chosen regularization schemes dramatically improve the quality of the inferred models.
We consider the particular case of $L_2$ and $L_1$ regularizations in the Maximum A Posteriori (MAP) inference of generative pairwise graphical models.
arXiv Detail & Related papers (2021-12-02T14:45:16Z)
- Estimating Linear Mixed Effects Models with Truncated Normally Distributed Random Effects [5.4052819252055055]
Inference can be conducted using a maximum likelihood approach when assuming normal distributions on the random effects.
In this paper we extend the classical (unconstrained) LME models to allow for sign constraints on its overall coefficients.
arXiv Detail & Related papers (2020-11-09T16:17:35Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
- Decision-Making with Auto-Encoding Variational Bayes [71.44735417472043]
We show that a posterior approximation distinct from the variational distribution should be used for making decisions.
Motivated by these theoretical results, we propose learning several approximate proposals for the best model.
In addition to toy examples, we present a full-fledged case study of single-cell RNA sequencing.
arXiv Detail & Related papers (2020-02-17T19:23:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.