Persistent Sampling: Unleashing the Potential of Sequential Monte Carlo
- URL: http://arxiv.org/abs/2407.20722v1
- Date: Tue, 30 Jul 2024 10:34:40 GMT
- Title: Persistent Sampling: Unleashing the Potential of Sequential Monte Carlo
- Authors: Minas Karamanis, Uroš Seljak
- Abstract summary: We introduce persistent sampling (PS), an extension of Sequential Monte Carlo (SMC) methods.
PS generates a growing, weighted ensemble of particles distributed across iterations.
PS consistently outperforms standard methods, achieving lower squared bias in posterior moment estimation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequential Monte Carlo (SMC) methods are powerful tools for Bayesian inference but suffer from requiring many particles for accurate estimates, leading to high computational costs. We introduce persistent sampling (PS), an extension of SMC that mitigates this issue by allowing particles from previous iterations to persist. This generates a growing, weighted ensemble of particles distributed across iterations. In each iteration, PS utilizes multiple importance sampling and resampling from the mixture of all previous distributions to produce the next generation of particles. This addresses particle impoverishment and mode collapse, resulting in more accurate posterior approximations. Furthermore, this approach provides lower-variance marginal likelihood estimates for model comparison. Additionally, the persistent particles improve transition kernel adaptation for efficient exploration. Experiments on complex distributions show that PS consistently outperforms standard methods, achieving lower squared bias in posterior moment estimation and significantly reduced marginal likelihood errors, all at a lower computational cost. PS offers a robust, efficient, and scalable framework for Bayesian inference.
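To make the mechanism concrete, here is a minimal, self-contained Python sketch of the idea on a one-dimensional toy problem. It is not the authors' implementation: the tempered path from prior to posterior, the single random-walk Metropolis move per iteration, and the decision to ignore the unknown normalizing constants of the intermediate bridge distributions (which the paper handles properly) are all simplifying assumptions.

```python
import numpy as np

# Illustrative 1-D toy problem: standard-normal prior, Gaussian likelihood.
def log_prior(x):
    return -0.5 * x**2

def log_like(x):
    return -0.5 * (x - 3.0)**2

def log_tempered(x, beta):
    # Tempered bridge pi_beta(x) proportional to prior(x) * likelihood(x)^beta.
    return log_prior(x) + beta * log_like(x)

def persistent_sampling(n=500, betas=(0.0, 0.25, 0.5, 0.75, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    history = [rng.standard_normal(n)]  # generation 0: draws from the prior
    for t, beta in enumerate(betas[1:], start=1):
        pool = np.concatenate(history)  # persistent ensemble: ALL past particles
        # Multiple importance sampling: weight the pooled particles against the
        # current bridge, treating the equal-weight mixture of all previous
        # bridges as the proposal (balance heuristic).
        log_mix = np.logaddexp.reduce(
            np.stack([log_tempered(pool, b) for b in betas[:t]]), axis=0
        ) - np.log(t)
        logw = log_tempered(pool, beta) - log_mix
        w = np.exp(logw - logw.max())
        # Resample the next generation from the whole pool, then apply one
        # random-walk Metropolis move targeting the current bridge.
        new = rng.choice(pool, size=n, p=w / w.sum())
        prop = new + 0.5 * rng.standard_normal(n)
        accept = np.log(rng.random(n)) < log_tempered(prop, beta) - log_tempered(new, beta)
        history.append(np.where(accept, prop, new))
    # Final PS estimate: the full pool, reweighted against the posterior (beta = 1).
    pool = np.concatenate(history)
    log_mix = np.logaddexp.reduce(
        np.stack([log_tempered(pool, b) for b in betas]), axis=0
    ) - np.log(len(betas))
    logw = log_tempered(pool, 1.0) - log_mix
    return pool, np.exp(logw - logw.max())
```

The key contrast with standard SMC is visible in the last few lines: the posterior estimate uses the entire accumulated pool of particles, not just the final generation.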
Related papers
- Multilevel Generative Samplers for Investigating Critical Phenomena [3.8160065878097797]
Long-range correlations cause critical slowing down in Markov chain Monte Carlo.
We propose a novel sampler specialized for near-critical systems.
We show that the effective sample size of RiGCS is a few orders of magnitude higher than that of state-of-the-art generative model baselines.
arXiv Detail & Related papers (2025-03-11T22:03:54Z) - Optimizing Singular Spectrum for Large Language Model Compression [95.7621116637755]
We introduce SoCo, a novel compression framework that learns to rescale the decomposed components of SVD in a data-driven manner.
Thanks to the learnable singular spectrum, SoCo adaptively prunes components according to the sparsified importance scores.
Experimental evaluations across multiple LLMs and benchmarks demonstrate that SoCo surpasses the state-of-the-art methods in model compression.
arXiv Detail & Related papers (2025-02-20T23:18:39Z) - von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Fast Semisupervised Unmixing Using Nonconvex Optimization [80.11512905623417]
We introduce a novel convex model for semisupervised/library-based unmixing.
We demonstrate the efficacy of an alternating optimization method for sparse semisupervised unmixing.
arXiv Detail & Related papers (2024-01-23T10:07:41Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score-based MCMC method whose sampling dynamics are deterministic, yielding a deterministic evolution of the particles.
arXiv Detail & Related papers (2023-08-28T23:51:33Z) - Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging [24.64264715041198]
We introduce Sparse Model Soups (SMS), a novel method for merging sparse models by initiating each prune-retrain cycle with the averaged model from the previous phase.
SMS preserves sparsity, exploits sparse network benefits, is modular and fully parallelizable, and substantially improves the performance of iterative magnitude pruning (IMP).
arXiv Detail & Related papers (2023-06-29T08:49:41Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
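As background, the AIS loop itself is compact enough to sketch. Below is a minimal vanilla version in Python with a fixed linear temperature schedule; the paper's constant-rate schedule and $\alpha$-divergence machinery are not reproduced, and the toy densities are illustrative.

```python
import numpy as np

def log_p0(x):          # tractable base density (standard normal), illustrative
    return -0.5 * x**2

def log_p1(x):          # target density; a shifted normal stands in here
    return -0.5 * (x - 4.0)**2

def ais(n=1000, n_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)          # exact draws from p0
    logw = np.zeros(n)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Incremental importance weight between successive annealed densities.
        logw += (b - b_prev) * (log_p1(x) - log_p0(x))
        # One random-walk Metropolis step targeting the current annealed density.
        log_gamma = lambda z: (1.0 - b) * log_p0(z) + b * log_p1(z)
        prop = x + rng.standard_normal(n)
        accept = np.log(rng.random(n)) < log_gamma(prop) - log_gamma(x)
        x = np.where(accept, prop, x)
    # Weighted samples; logsumexp(logw) - log(n) estimates log(Z1/Z0).
    return x, logw
```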
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - Plug-and-Play split Gibbs sampler: embedding deep generative priors in Bayesian inference [12.91637880428221]
This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
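The variable-splitting idea is easy to illustrate: introduce an auxiliary variable z coupled to x through a Gaussian of width rho, then alternate the two conditional sampling steps. A toy Gaussian instance follows, with a standard-normal prior standing in where a plug-and-play method would invoke a deep generative prior; all names and values here are illustrative.

```python
import numpy as np

def split_gibbs(y, n_iter=2000, rho=0.5, sigma=1.0, seed=0):
    """Toy split Gibbs sampler for p(x | y) with a Gaussian likelihood
    N(y; x, sigma^2 I) and a standard-normal stand-in prior on x."""
    rng = np.random.default_rng(seed)
    x = np.zeros_like(y, dtype=float)
    z = np.zeros_like(y, dtype=float)
    for _ in range(n_iter):
        # Step 1, x | z: likelihood times Gaussian coupling exp(-|x-z|^2 / (2 rho^2));
        # both factors are Gaussian, so the conditional is Gaussian too.
        var_x = 1.0 / (1.0 / sigma**2 + 1.0 / rho**2)
        mean_x = var_x * (y / sigma**2 + z / rho**2)
        x = mean_x + np.sqrt(var_x) * rng.standard_normal(y.shape)
        # Step 2, z | x: prior times coupling. A plug-and-play sampler replaces
        # this conjugate update with a draw guided by the generative prior.
        var_z = 1.0 / (1.0 + 1.0 / rho**2)
        mean_z = var_z * (x / rho**2)
        z = mean_z + np.sqrt(var_z) * rng.standard_normal(y.shape)
    return x
```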
arXiv Detail & Related papers (2023-04-21T17:17:51Z) - Bayesian Pseudo-Coresets via Contrastive Divergence [5.479797073162603]
We introduce a novel approach for constructing pseudo-coresets by utilizing contrastive divergence.
It eliminates the need for approximations in the pseudo-coreset construction process.
We conduct extensive experiments on multiple datasets, demonstrating its superiority over existing BPC techniques.
arXiv Detail & Related papers (2023-03-20T17:13:50Z) - Preconditioned Score-based Generative Models [49.88840603798831]
An intuitive acceleration method is to reduce the number of sampling iterations, but doing so causes severe performance degradation.
We propose a model-agnostic preconditioned diffusion sampling (PDS) method that leverages matrix preconditioning to alleviate the aforementioned problem.
PDS alters the sampling process of a vanilla SGM at marginal extra computation cost, and without model retraining.
arXiv Detail & Related papers (2023-02-13T16:30:53Z) - Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples [0.7087237546722617]
Uncertainty propagation is a technique to determine model output uncertainties based on the uncertainty in its input variables.
In this work, we investigate the hypothesis that while all samples are useful on average, some samples must be more useful than others.
We introduce a methodology to adaptively reorder MC samples and show that it reduces the computational expense of UP processes.
arXiv Detail & Related papers (2023-02-09T21:28:15Z) - Post-Processing Temporal Action Detection [134.26292288193298]
Temporal Action Detection (TAD) methods typically take a pre-processing step in converting an input varying-length video into a fixed-length snippet representation sequence.
This pre-processing step would temporally downsample the video, reducing the inference resolution and hampering the detection performance in the original temporal resolution.
We introduce a novel model-agnostic post-processing method without model redesign and retraining.
arXiv Detail & Related papers (2022-11-27T19:50:37Z) - Convergence for score-based generative modeling with polynomial complexity [9.953088581242845]
We prove the first convergence guarantees for the core mechanic behind score-based generative modeling.
Compared to previous works, we do not incur error that grows exponentially in time or that suffers from a curse of dimensionality.
We show that a predictor-corrector gives better convergence than using either portion alone.
arXiv Detail & Related papers (2022-06-13T14:57:35Z) - Posterior Coreset Construction with Kernelized Stein Discrepancy for
Model-Based Reinforcement Learning [78.30395044401321]
We develop a novel model-based approach to reinforcement learning (MBRL)
It relaxes the assumptions on the target transition model to belong to a generic family of mixture models.
It can achieve up-to 50 percent reduction in wall clock time in some continuous control environments.
arXiv Detail & Related papers (2022-06-02T17:27:49Z) - Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
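The propagate-weight-resample loop underlying those estimates is worth seeing once. A minimal bootstrap particle filter on a linear-Gaussian toy model follows, purely for illustration (the paper's contribution, learning the proposal distribution by unrolling, is not reproduced here).

```python
import numpy as np

def bootstrap_pf(ys, n=1000, seed=0):
    """Bootstrap particle filter for an illustrative linear-Gaussian model:
    x_t = 0.9 * x_{t-1} + v_t,   y_t = x_t + e_t,   v_t, e_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    means = []
    for y in ys:
        x = 0.9 * x + rng.standard_normal(n)   # propagate through the dynamics
        logw = -0.5 * (y - x)**2               # weight by the observation likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))            # filtering mean estimate
        x = rng.choice(x, size=n, p=w)         # multinomial resampling
    return np.array(means)
```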
arXiv Detail & Related papers (2021-10-06T16:58:34Z) - Scalable Spatiotemporally Varying Coefficient Modelling with Bayesian Kernelized Tensor Regression [17.158289775348063]
Bayesian Kernelized Tensor Regression (BKTR) can be considered a new and scalable approach to modeling processes with a low-rank spatiotemporal structure.
We conduct extensive experiments on both synthetic and real-world data sets, and our results confirm the superior performance and efficiency of BKTR for model estimation and inference.
arXiv Detail & Related papers (2021-08-31T19:22:23Z) - What Are Bayesian Neural Network Posteriors Really Like? [63.950151520585024]
We show that Hamiltonian Monte Carlo (HMC) can achieve significant performance gains over standard training and deep ensembles.
We also show that deep ensemble predictive distributions are about as close to the HMC posterior predictive as those of standard SGLD, and closer than those of standard variational inference.
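For readers unfamiliar with the reference sampler used there, a single HMC transition is a leapfrog integration of Hamiltonian dynamics followed by a Metropolis correction. A minimal sketch follows; it is illustrative only, not the paper's large-scale implementation, and `log_prob`/`grad_log_prob` are user-supplied callables.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step=0.1, n_leap=20, rng=None):
    """One HMC transition: leapfrog integration plus a Metropolis correction."""
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(x.shape)         # resample auxiliary momentum
    x_new, p = x.copy(), p0.copy()
    p += 0.5 * step * grad_log_prob(x_new)    # initial half step in momentum
    for _ in range(n_leap - 1):
        x_new += step * p                     # full position step
        p += step * grad_log_prob(x_new)      # full momentum step
    x_new += step * p
    p += 0.5 * step * grad_log_prob(x_new)    # final half step in momentum
    # Accept or reject based on the change in the Hamiltonian.
    h_old = -log_prob(x) + 0.5 * p0 @ p0
    h_new = -log_prob(x_new) + 0.5 * p @ p
    return x_new if np.log(rng.random()) < h_old - h_new else x
```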
arXiv Detail & Related papers (2021-04-29T15:38:46Z) - Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
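The auxiliary slice variables mentioned above are the same device as in generic slice sampling, which is compact enough to sketch. Below is a univariate slice sampler with Neal's stepping-out and shrinkage procedures; the paper's adaptive truncation of completely random measures is a more elaborate use of the same idea, not reproduced here.

```python
import numpy as np

def slice_sample(log_prob, x0, n=1000, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage (Neal, 2003)."""
    rng = np.random.default_rng(seed)
    x, out = float(x0), []
    for _ in range(n):
        # Auxiliary slice variable: a height drawn under the density at x.
        log_y = log_prob(x) + np.log(rng.random())
        # Stepping out: grow an interval of width w until it brackets the slice.
        lo = x - w * rng.random()
        hi = lo + w
        while log_prob(lo) > log_y:
            lo -= w
        while log_prob(hi) > log_y:
            hi += w
        # Shrinkage: sample uniformly, shrinking the interval on rejection.
        while True:
            x_new = rng.uniform(lo, hi)
            if log_prob(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        out.append(x)
    return np.array(out)
```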
arXiv Detail & Related papers (2020-06-24T17:53:53Z) - Stacking for Non-mixing Bayesian Computations: The Curse and Blessing of
Multimodal Posteriors [8.11978827493967]
We propose an approach using parallel runs of MCMC, variational, or mode-based inference to hit as many modes as possible.
We present a theoretical consistency result, with an example in which the stacked inference process approximates the true data-generating process.
We demonstrate practical implementation in several model families.
arXiv Detail & Related papers (2020-06-22T15:26:59Z) - Efficiently Sampling Functions from Gaussian Process Posteriors [76.94808614373609]
We propose an easy-to-use and general-purpose approach for fast posterior sampling.
We demonstrate how decoupled sample paths accurately represent Gaussian process posteriors at a fraction of the usual cost.
arXiv Detail & Related papers (2020-02-21T14:03:16Z)
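The decoupled sample paths referenced in that last entry can be sketched compactly: draw approximate prior functions from random Fourier features, then correct them with a data-driven pathwise (Matheron's rule) update. A minimal version under an RBF kernel follows; the feature count, lengthscale, noise level, and helper names are illustrative assumptions.

```python
import numpy as np

def decoupled_gp_samples(X, y, Xs, n_feat=512, ls=1.0, noise=0.1, n_samp=10, seed=0):
    """Pathwise GP posterior sampling: random-Fourier-feature prior functions
    corrected by Matheron's rule. X: (n, d) inputs, y: (n,) targets, Xs: (m, d)."""
    rng = np.random.default_rng(seed)
    # Random Fourier features approximating an RBF kernel with lengthscale ls.
    W = rng.standard_normal((n_feat, X.shape[1])) / ls
    b = rng.uniform(0.0, 2.0 * np.pi, n_feat)
    phi = lambda Z: np.sqrt(2.0 / n_feat) * np.cos(Z @ W.T + b)
    theta = rng.standard_normal((n_feat, n_samp))  # prior weight samples
    f_prior_X = phi(X) @ theta                     # prior draws at the data
    f_prior_s = phi(Xs) @ theta                    # prior draws at test points
    # Exact kernel matrices for the pathwise (Matheron) update.
    k = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ls**2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    eps = noise * rng.standard_normal((len(X), n_samp))
    update = k(Xs, X) @ np.linalg.solve(K, y[:, None] - f_prior_X - eps)
    return f_prior_s + update                      # posterior function samples at Xs
```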
This list is automatically generated from the titles and abstracts of the papers on this site.