Natural Gradient Variational Inference with Gaussian Mixture Models
- URL: http://arxiv.org/abs/2111.08002v1
- Date: Mon, 15 Nov 2021 20:04:32 GMT
- Title: Natural Gradient Variational Inference with Gaussian Mixture Models
- Authors: Farzaneh Mahdisoltani
- Abstract summary: Variational Inference (VI) methods approximate the posterior with a distribution usually chosen from a simple family using optimization.
The main contribution of this work is a set of update rules for natural gradient variational inference with a mixture of Gaussians.
- Score: 1.7948767405202701
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bayesian methods estimate a measure of uncertainty by using the posterior
distribution. One source of difficulty in these methods is the computation of
the normalizing constant. Calculating the exact posterior is generally
intractable, so we usually approximate it. Variational Inference (VI) methods
approximate the posterior with a distribution, usually chosen from a simple
family, using optimization. The main contribution of this work is a set of
update rules for natural gradient variational inference with a mixture of
Gaussians, which can be run independently for each of the mixture components,
potentially in parallel.
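The paper's exact update rules are not reproduced here, but the flavor of per-component natural-gradient updates can be illustrated. The sketch below is a minimal 1-D example, assuming a toy bimodal target and the standard natural-gradient updates for a single Gaussian (the precision mixes toward the negative expected Hessian of the log-density, the mean moves along the preconditioned expected gradient); treating each component's objective independently is a simplification for illustration, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target: p(t) proportional to 0.5 N(-2, 0.5^2) + 0.5 N(+2, 0.5^2).
MU = np.array([-2.0, 2.0])
SD = np.array([0.5, 0.5])
W = np.array([0.5, 0.5])

def grad_hess_logp(t):
    """First and second derivatives of log p at a scalar t."""
    dens = W * np.exp(-0.5 * ((t - MU) / SD) ** 2) / (SD * np.sqrt(2 * np.pi))
    r = dens / dens.sum()                       # responsibilities of the target's modes
    d1 = -(t - MU) / SD ** 2                    # d/dt log of each target component
    g = np.sum(r * d1)                          # d/dt log p
    d2 = -1.0 / SD ** 2
    h = np.sum(r * (d2 + d1 ** 2)) - g ** 2     # d2/dt2 log p
    return g, h

# Variational mixture of two Gaussians, parameterized by mean and precision.
m = np.array([-1.5, 1.5])
prec = np.array([2.0, 2.0])
rho, n_mc = 0.1, 32                             # step size, Monte Carlo samples

for step in range(300):
    for c in range(2):                          # independent per-component updates (parallelizable)
        theta = m[c] + rng.standard_normal(n_mc) / np.sqrt(prec[c])
        g, h = np.mean([grad_hess_logp(t) for t in theta], axis=0)
        # Natural-gradient updates for a Gaussian approximation:
        prec[c] = max((1 - rho) * prec[c] - rho * h, 1e-3)  # guard against MC noise
        m[c] = m[c] + rho * g / prec[c]

print("means  :", m)                            # expect roughly [-2, 2]
print("stddevs:", 1 / np.sqrt(prec))            # expect roughly [0.5, 0.5]
```

Each component locks onto one mode of the target, and the inner loop over components touches no shared state, which is what makes per-component updates embarrassingly parallel.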
Related papers
- SoftCVI: Contrastive variational inference with self-generated soft labels [2.5398014196797614]
Variational inference and Markov chain Monte Carlo methods are the predominant tools for approximating intractable posterior distributions.
We introduce Soft Contrastive Variational Inference (SoftCVI), which allows a family of variational objectives to be derived through a contrastive estimation framework.
We find that SoftCVI can be used to form objectives which are stable to train and mass-covering, frequently outperforming inference with other variational approaches.
arXiv Detail & Related papers (2024-07-22T14:54:12Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over meta-analysis-based methods as heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Variational Elliptical Processes [1.5703073293718952]
We present elliptical processes, a family of non-parametric probabilistic models that subsumes Gaussian processes and Student's t processes.
We parameterize this mixture distribution as a spline normalizing flow, which we train using variational inference.
The proposed form of the variational posterior enables a sparse variational elliptical process applicable to large-scale problems.
arXiv Detail & Related papers (2023-11-21T12:26:14Z)
- Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from quadratic growth in the number of variational parameters with the number of model parameters.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z)
- A Quadrature Rule combining Control Variates and Adaptive Importance Sampling [0.0]
We show that a simple weighted least squares approach can be used to improve the accuracy of Monte Carlo integration estimates.
Our main result is a non-asymptotic bound on the probabilistic error of the procedure.
The good behavior of the method is illustrated empirically on synthetic examples and real-world data for Bayesian linear regression; a sketch of the control-variates idea follows this entry.
arXiv Detail & Related papers (2022-05-24T08:21:45Z)
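A minimal sketch of the combination described above, under illustrative assumptions (Gaussian target and proposal, polynomial control variates with known zero mean); the paper's adaptive proposal and theoretical weighting are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate I = E_pi[f(X)] for pi = N(0,1) and f(x) = x^2 (true value: 1).
f = lambda x: x ** 2
n = 2000

# Importance sampling proposal q = N(0, 1.5^2); weights w = pi/q.
sigma_q = 1.5
x = sigma_q * rng.standard_normal(n)
log_w = (-0.5 * x ** 2) - (-0.5 * x ** 2 / sigma_q ** 2 - np.log(sigma_q))
w = np.exp(log_w)

# Control variates: Hermite polynomials x and x^2 - 1 have mean zero under pi.
A = np.column_stack([np.ones(n), x, x ** 2 - 1])

# Weighted least squares of f on [1, controls]; the intercept estimates I.
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(A * sw[:, None], f(x) * sw, rcond=None)

print("plain IS estimate:", np.sum(w * f(x)) / np.sum(w))   # noisy
print("CV + IS estimate :", beta[0])                        # exact here, since
# the controls span f(x) = 1 + (x^2 - 1) exactly in this toy problem
```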
- Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
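A hedged sketch of that pipeline, with toy degree-corrected block model parameters chosen purely for illustration: form the random-walk matrix D^{-1}A and embed nodes using its leading eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-community graph with heterogeneous degrees (DCBM-style sample).
n, K = 100, 2
z = np.repeat([0, 1], n // 2)                    # community labels
theta = rng.uniform(0.3, 1.0, n)                 # degree parameters
B = np.array([[0.9, 0.1], [0.1, 0.9]])           # block connectivity
P = np.outer(theta, theta) * B[np.ix_(z, z)]
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                   # symmetric, no self-loops

# Random-walk spectral embedding: leading K eigenvectors of D^{-1} A.
d = A.sum(1); d[d == 0] = 1                      # avoid division by zero
Lrw = A / d[:, None]
vals, vecs = np.linalg.eig(Lrw)
idx = np.argsort(-vals.real)[:K]
embedding = vecs[:, idx].real
# Per the paper's claim, the rows concentrate near K distinct points,
# one per community, with degree effects removed.
```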
arXiv Detail & Related papers (2021-05-03T16:36:27Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Algorithmic inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
As an alternative, Gaussian processes can be conditioned at the level of individual sample paths; we show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
- Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
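The two Gaussian-process entries above both rest on pathwise conditioning (Matheron's rule): a posterior sample is a prior sample plus a data-dependent update. Below is a minimal sketch under illustrative assumptions (squared-exponential kernel, toy data); the decoupled-sampling paper further approximates the joint prior draw with random features to avoid the cubic cost in the number of test locations, whereas this sketch draws the prior exactly for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

def k(a, b, ls=0.5):
    # Squared-exponential kernel (an illustrative choice).
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = np.array([-1.0, 0.0, 1.0])          # training inputs
y = np.sin(3 * X)                       # training targets
Xs = np.linspace(-2.0, 2.0, 50)         # test inputs
noise = 1e-2                            # observation-noise variance

# One joint prior draw over training and test locations.
Z = np.concatenate([X, Xs])
L = np.linalg.cholesky(k(Z, Z) + 1e-6 * np.eye(len(Z)))
f = L @ rng.standard_normal(len(Z))
fX, fs = f[:len(X)], f[len(X):]

# Matheron's rule: posterior sample = prior sample + data-driven update.
eps = np.sqrt(noise) * rng.standard_normal(len(X))      # simulated noise
Kxx = k(X, X) + noise * np.eye(len(X))
posterior_sample = fs + k(Xs, X) @ np.linalg.solve(Kxx, y - (fX + eps))
```

Repeating the prior draw (and the noise draw) yields independent posterior samples without ever forming the posterior covariance over the test locations.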