You Only Accept Samples Once: Fast, Self-Correcting Stochastic Variational Inference
- URL: http://arxiv.org/abs/2406.02838v1
- Date: Wed, 5 Jun 2024 01:28:53 GMT
- Title: You Only Accept Samples Once: Fast, Self-Correcting Stochastic Variational Inference
- Authors: Dominic B. Dayta
- Abstract summary: YOASOVI is an algorithm for performing fast, self-correcting stochastic optimization for Variational Inference (VI) on large Bayesian hierarchical models.
To accomplish this, we take advantage of available information on the objective function used for VI at each iteration and replace regular Monte Carlo sampling with acceptance sampling.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce YOASOVI, an algorithm for performing fast, self-correcting stochastic optimization for Variational Inference (VI) on large Bayesian hierarchical models. To accomplish this, we take advantage of available information on the objective function used for stochastic VI at each iteration and replace regular Monte Carlo sampling with acceptance sampling. Rather than spend computational resources drawing and evaluating a large sample for the gradient, we draw only one sample and accept it with probability proportional to the expected improvement in the objective. This paper develops two versions of the algorithm: the first based on a naive intuition, and the second building up the algorithm as a Metropolis-type scheme. Empirical results based on simulations and benchmark datasets for multivariate Gaussian mixture models show that YOASOVI consistently converges faster (in clock time) and within better optimal neighborhoods than both regular Monte Carlo and Quasi-Monte Carlo VI algorithms.
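The accept-once idea described in the abstract can be sketched on a toy problem. This is a minimal illustration under my own assumptions, not the paper's implementation: the objective, step size, temperature, and all names are hypothetical stand-ins for the ELBO and the paper's actual scheme.

```python
import math
import random

def noisy_objective(theta):
    """One-sample Monte Carlo estimate of a concave objective (a stand-in for an ELBO)."""
    return -(theta - 3.0) ** 2 + random.gauss(0.0, 0.5)

def accept_once_step(theta, step=0.1, temp=1.0):
    """Propose one small random move and accept it Metropolis-style,
    with probability based on the estimated improvement in the objective."""
    proposal = theta + random.uniform(-step, step)
    improvement = noisy_objective(proposal) - noisy_objective(theta)
    if random.random() < min(1.0, math.exp(improvement / temp)):
        return proposal
    return theta

random.seed(0)
theta, history = 0.0, []
for _ in range(5000):
    theta = accept_once_step(theta)
    history.append(theta)
tail_mean = sum(history[-1000:]) / 1000
print(round(tail_mean, 1))  # the chain drifts toward the optimum near 3
```

Only one fresh sample is spent per proposal, and the Metropolis-style acceptance lets occasional noisy downhill moves through while still biasing the search toward improvement.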
Related papers
- Variational Inference with Gaussian Score Matching [1.2233362977312945]
We present a new approach to VI based on the principle of score matching.
We develop Gaussian score matching VI (GSM-VI), an iterative algorithm that seeks to match the scores of the variational approximation and the exact posterior.
In all of our studies we find that GSM-VI is faster than BBVI without sacrificing accuracy.
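The score-matching principle can be illustrated on a one-dimensional Gaussian target. This is my own minimal sketch, not the GSM-VI algorithm from the paper: both densities have unit variance for simplicity, and only the variational mean is fit.

```python
import random

def score_p(x):
    """Score (gradient of log density) of the target posterior N(2, 1)."""
    return -(x - 2.0)

def score_q(x, mu):
    """Score of the variational approximation N(mu, 1)."""
    return -(x - mu)

# Iteratively nudge mu so the scores of q and p agree at samples from q.
mu, lr = 0.0, 0.05
random.seed(1)
for _ in range(2000):
    x = random.gauss(mu, 1.0)            # sample from the current q
    diff = score_q(x, mu) - score_p(x)   # here equals mu - 2 (unit variances cancel x)
    mu -= lr * diff                      # gradient step on 0.5 * diff**2 w.r.t. mu
print(round(mu, 2))  # → 2.0
```

Driving the score difference to zero at samples from q pulls the variational mean onto the posterior mean.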
arXiv Detail & Related papers (2023-07-15T16:57:48Z)
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
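The inverse-free idea behind this approach can be sketched with a generic iterative linear solver. This is not the paper's method, only a plain conjugate-gradient solve showing how an explicit matrix inversion can be replaced by a few solver iterations.

```python
def conjugate_gradient(A, b, iters=50):
    """Solve A x = b for symmetric positive-definite A without forming A^{-1}."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                              # residual b - A x (x starts at 0)
    p = r[:]
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        rr = sum(ri * ri for ri in r)
        if rr < 1e-12:                    # converged
            break
        alpha = rr / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        beta = sum(ri * ri for ri in r) / rr
        p = [r[i] + beta * p[i] for i in range(n)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)
print([round(v, 3) for v in x])  # → [0.091, 0.636], i.e. [1/11, 7/11]
```

For an n-by-n SPD system, CG needs only matrix-vector products, which is what makes "unrolling and backpropagating through the solver" feasible at scale.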
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
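A single Langevin Monte Carlo chain can be sketched as follows. This is a generic (unadjusted) Langevin update on a toy log posterior, not the paper's LMC-TS bandit algorithm: noisy gradient ascent with injected Gaussian noise, so the iterates approximately sample from the posterior.

```python
import math
import random

def grad_log_post(theta):
    """Gradient of the log posterior; here the posterior is N(1, 1)."""
    return -(theta - 1.0)

random.seed(2)
eta, theta, samples = 0.01, 0.0, []
for t in range(50000):
    # Langevin update: gradient step plus sqrt(2 * eta) Gaussian noise.
    theta += eta * grad_log_post(theta) + math.sqrt(2 * eta) * random.gauss(0.0, 1.0)
    if t >= 10000:                       # discard burn-in
        samples.append(theta)
mean = sum(samples) / len(samples)
print(round(mean, 1))  # sample mean should be near the posterior mean of 1
```

In a Thompson-sampling setting, drawing the next action from such a chain stands in for sampling a parameter from the exact posterior.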
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
- Low-variance estimation in the Plackett-Luce model via quasi-Monte Carlo sampling [58.14878401145309]
We develop a novel approach to producing more sample-efficient estimators of expectations in the PL model.
We illustrate our findings both theoretically and empirically using real-world recommendation data from Amazon Music and the Yahoo learning-to-rank challenge.
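The general quasi-Monte Carlo idea can be shown on a simple integral. This is my own sketch, not the paper's Plackett-Luce estimator: i.i.d. uniforms are replaced by a low-discrepancy van der Corput sequence, which spreads points more evenly and reduces integration error.

```python
def van_der_corput(n, base=2):
    """n-th element of the base-2 van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def f(u):
    """Integrand u^2 on [0, 1]; the true integral is 1/3."""
    return u * u

n = 1024
qmc_points = [van_der_corput(i) for i in range(1, n + 1)]
qmc_est = sum(f(u) for u in qmc_points) / n
print(round(qmc_est, 3))  # → 0.333, close to the true value 1/3
```

For smooth integrands the QMC error decays roughly like O(log n / n), versus O(1 / sqrt(n)) for plain Monte Carlo with the same budget.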
arXiv Detail & Related papers (2022-05-12T11:15:47Z)
- Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods [35.24886589614034]
We consider Ising models on the hypercube with a general interaction matrix $J$.
Our general result implies the first polynomial-time sampling algorithms for low-rank Ising models.
arXiv Detail & Related papers (2022-02-17T21:43:50Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Direct sampling of projected entangled-pair states [0.0]
Variational Monte Carlo studies employing projected entangled-pair states (PEPS) have recently shown that they can provide answers to long-standing questions.
We propose a sampling algorithm that generates independent samples from a PEPS, bypassing all problems related to finite autocorrelation times.
arXiv Detail & Related papers (2021-09-15T15:09:20Z)
- Adaptive Sequential SAA for Solving Two-stage Stochastic Linear Programs [1.6181085766811525]
We present adaptive sequential SAA (sample average approximation) algorithms to solve large-scale two-stage linear programs.
The proposed algorithm can be stopped in finite-time to return a solution endowed with a probabilistic guarantee on quality.
arXiv Detail & Related papers (2020-12-07T14:58:16Z)
- Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume maximization on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
- Statistical Outlier Identification in Multi-robot Visual SLAM using Expectation Maximization [18.259478519717426]
This paper introduces a novel, distributed method for detecting inter-map loop-closure outliers in simultaneous localization and mapping (SLAM).
The proposed algorithm does not rely on a good initialization and can handle more than two maps at a time.
arXiv Detail & Related papers (2020-02-07T06:34:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.