Compressed Monte Carlo with application in particle filtering
- URL: http://arxiv.org/abs/2107.08459v1
- Date: Sun, 18 Jul 2021 14:32:04 GMT
- Title: Compressed Monte Carlo with application in particle filtering
- Authors: Luca Martino, Víctor Elvira
- Abstract summary: We introduce the theory and practice of a Compressed MC (C-MC) scheme to compress the statistical information contained in a set of random samples.
C-MC is useful within particle filtering and adaptive IS algorithms, as shown by three novel schemes introduced in this work.
- Score: 11.84836209560411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian models have become very popular in recent years in several
fields such as signal processing, statistics, and machine learning. Bayesian
inference requires the approximation of complicated integrals involving
posterior distributions. For this purpose, Monte Carlo (MC) methods, such as
Markov Chain Monte Carlo and importance sampling algorithms, are often
employed. In this work, we introduce the theory and practice of a Compressed MC
(C-MC) scheme to compress the statistical information contained in a set of
random samples. In its basic version, C-MC is strictly related to the
stratification technique, a well-known method used for variance reduction
purposes. Deterministic C-MC schemes are also presented, which provide very
good performance. The compression problem is strictly related to the moment
matching approach applied in different filtering techniques, usually called
Gaussian quadrature rules or sigma-point methods. C-MC can be employed in a
distributed Bayesian inference framework when cheap and fast communications
with a central processor are required. Furthermore, C-MC is useful within
particle filtering and adaptive IS algorithms, as shown by three novel schemes
introduced in this work. Six numerical experiments confirm the benefits of the
introduced schemes, which outperform the corresponding benchmark methods.
Related code is also provided.
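To make the basic idea concrete, the sketch below compresses a set of weighted samples by stratifying them and replacing each stratum with a single particle at its weighted mean, so that the total mass and the global weighted mean are preserved exactly. This is a minimal illustration of the stratification and moment-matching ideas described in the abstract, not the authors' released code; the function name `compress_mc` and the sorting-based stratification are illustrative choices.

```python
import numpy as np

def compress_mc(samples, weights, n_strata):
    """Minimal sketch of stratified Monte Carlo compression.

    Sorts 1-D weighted samples into contiguous strata and replaces each
    stratum with one particle at its weighted mean, carrying the
    stratum's total weight.  Total mass and the global weighted mean
    (zeroth and first moments) are preserved exactly.
    """
    order = np.argsort(samples)               # contiguous strata after sorting
    strata = np.array_split(order, n_strata)  # roughly equal-size strata
    comp_x, comp_w = [], []
    for idx in strata:
        w_s = weights[idx]
        total = w_s.sum()
        comp_x.append(np.dot(w_s, samples[idx]) / total)  # moment matching
        comp_w.append(total)
    return np.array(comp_x), np.array(comp_w)

# Toy usage: compress 10,000 importance samples down to 20 particles.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
w = np.exp(-0.5 * (x - 1.0) ** 2)    # unnormalized importance weights
w /= w.sum()
xc, wc = compress_mc(x, w, 20)
print(np.dot(w, x), np.dot(wc, xc))  # the two weighted means agree
```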
Related papers
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- SpreadNUTS -- Moderate Dynamic Extension of Paths for No-U-Turn Sampling & Partitioning Visited Regions [0.0]
This paper introduces modifications to a specific Hamiltonian Monte Carlo (HMC) algorithm known as the no-U-turn sampler (NUTS).
SpreadNUTS aims to explore the sample space faster than NUTS, yielding a sampler that converges to the true distribution faster than NUTS.
arXiv Detail & Related papers (2023-07-09T05:00:25Z)
- Reverse Diffusion Monte Carlo [19.35592726471155]
We propose a novel Monte Carlo sampling algorithm called reverse diffusion Monte Carlo (rdMC).
rdMC is distinct from Markov chain Monte Carlo (MCMC) methods.
arXiv Detail & Related papers (2023-07-05T05:42:03Z)
- Parallel Approaches to Accelerate Bayesian Decision Trees [1.9728521995447947]
We propose two methods for exploiting parallelism in the MCMC used for Bayesian decision trees.
In the first, we replace the MCMC with another numerical Bayesian approach.
In the second, we consider data partitioning.
arXiv Detail & Related papers (2023-01-22T09:56:26Z)
- Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
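The sampling primitive behind this approach is the (unadjusted) Langevin update, which perturbs a gradient step on the negative log-posterior with Gaussian noise of matched scale to produce approximate posterior draws. The sketch below shows only that generic update, not the paper's algorithm; the name `langevin_step`, the toy target, and the step size are illustrative assumptions.

```python
import numpy as np

def langevin_step(theta, grad_neg_log_post, step, rng):
    """One unadjusted Langevin update: a gradient step on the negative
    log-posterior plus Gaussian noise of matched scale."""
    noise = rng.normal(size=theta.shape)
    return theta - step * grad_neg_log_post(theta) + np.sqrt(2.0 * step) * noise

# Toy usage: approximate draws from a standard Gaussian posterior, for
# which the gradient of -log p(theta) is simply theta.
rng = np.random.default_rng(0)
theta = np.zeros(2)
for _ in range(1_000):
    theta = langevin_step(theta, lambda t: t, step=1e-2, rng=rng)
print(theta)  # one approximate posterior draw
```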
arXiv Detail & Related papers (2022-06-22T17:58:23Z)
- A Survey of Monte Carlo Methods for Parameter Estimation [0.0]
This paper reviews Monte Carlo (MC) methods for the estimation of static parameters in signal processing applications.
A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm.
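For reference, the RS algorithm mentioned here admits a very short implementation: draw x from a proposal q and accept it with probability p(x) / (M q(x)), which requires the envelope condition p(x) <= M q(x) everywhere. The sketch below is a textbook version under those assumptions, not code from the survey; all names are illustrative.

```python
import numpy as np

def rejection_sampling(target_pdf, draw_proposal, proposal_pdf, M, n, rng):
    """Basic rejection sampling: keep x ~ q with probability
    target_pdf(x) / (M * proposal_pdf(x)), assuming p <= M * q."""
    accepted = []
    while len(accepted) < n:
        x = draw_proposal(rng)
        if rng.uniform() < target_pdf(x) / (M * proposal_pdf(x)):
            accepted.append(x)
    return np.array(accepted)

# Toy usage: Beta(2, 2) target (density 6x(1-x), maximum 1.5) with a
# Uniform(0, 1) proposal, so M = 1.5 gives a valid envelope.
rng = np.random.default_rng(0)
draws = rejection_sampling(
    target_pdf=lambda x: 6.0 * x * (1.0 - x),
    draw_proposal=lambda rng: rng.uniform(),
    proposal_pdf=lambda x: 1.0,
    M=1.5, n=5_000, rng=rng,
)
print(draws.mean())  # should be close to 0.5
```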
arXiv Detail & Related papers (2021-07-25T14:57:58Z)
- Annealed Flow Transport Monte Carlo [91.20263039913912]
Annealed Flow Transport (AFT) builds upon Annealed Importance Sampling (AIS) and Sequential Monte Carlo (SMC).
AFT relies on a normalizing flow (NF) that is learned sequentially to push particles towards the successive targets.
We show that a continuous-time scaling limit of the population version of AFT is given by a Feynman--Kac measure.
arXiv Detail & Related papers (2021-02-15T12:05:56Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Kernel learning approaches for summarising and combining posterior similarity matrices [68.8204255655161]
We build upon the notion of the posterior similarity matrix (PSM) in order to suggest new approaches for summarising the output of MCMC algorithms for Bayesian clustering models.
A key contribution of our work is the observation that PSMs are positive semi-definite, and hence can be used to define probabilistically-motivated kernel matrices.
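To make the PSM construction concrete: from T MCMC sweeps of cluster allocations over N items, entry (i, j) is the fraction of sweeps in which items i and j are co-clustered. Each sweep's 0/1 co-clustering matrix is a sum of indicator outer products, hence positive semi-definite, and so is their average, which is what allows the PSM to serve as a kernel matrix. The sketch below is a generic construction under those definitions, not the paper's code.

```python
import numpy as np

def posterior_similarity_matrix(allocations):
    """PSM from MCMC cluster allocations.

    allocations: (T, N) integer array; row t holds the cluster label of
    each of N items at sweep t.  Each sweep contributes a 0/1
    co-clustering matrix (a sum of indicator outer products, hence
    positive semi-definite); the PSM is their average.
    """
    T = allocations.shape[0]
    psm = sum((labels[:, None] == labels[None, :]).astype(float)
              for labels in allocations)
    return psm / T

# Toy usage: 3 sweeps over 4 items.
alloc = np.array([[0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 1, 1, 1]])
print(posterior_similarity_matrix(alloc))
```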
arXiv Detail & Related papers (2020-09-27T14:16:14Z)
- Markov-Chain Monte Carlo Approximation of the Ideal Observer using Generative Adversarial Networks [14.792685152780795]
The performance of the Ideal Observer (IO) has been advocated as a figure of merit when optimizing medical imaging systems for signal detection tasks.
To approximate the IO test statistic, sampling-based methods that employ Markov-Chain Monte Carlo (MCMC) techniques have been developed.
Deep learning methods that employ generative adversarial networks (GANs) hold great promise to learn object models from image data.
arXiv Detail & Related papers (2020-01-26T21:51:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.