A Survey of Monte Carlo Methods for Parameter Estimation
- URL: http://arxiv.org/abs/2107.11820v1
- Date: Sun, 25 Jul 2021 14:57:58 GMT
- Title: A Survey of Monte Carlo Methods for Parameter Estimation
- Authors: D. Luengo, L. Martino, M. Bugallo, V. Elvira, S. Särkkä
- Abstract summary: This paper reviews Monte Carlo (MC) methods for the estimation of static parameters in signal processing applications.
A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical signal processing applications usually require the estimation of
some parameters of interest given a set of observed data. These estimates are
typically obtained either by solving a multi-variate optimization problem, as
in the maximum likelihood (ML) or maximum a posteriori (MAP) estimators, or by
performing a multi-dimensional integration, as in the minimum mean squared
error (MMSE) estimators. Unfortunately, analytical expressions for these
estimators cannot be found in most real-world applications, and the Monte Carlo
(MC) methodology is one feasible approach. MC methods proceed by drawing random
samples, either from the desired distribution or from a simpler one, and using
them to compute consistent estimators. The most important families of MC
algorithms are Markov chain MC (MCMC) and importance sampling (IS). On the one
hand, MCMC methods draw candidate samples from a proposal density and accept or
reject them as the new state of the chain, thereby building an ergodic Markov
chain whose stationary distribution is the desired distribution. On the other
hand, IS techniques draw samples from a simple proposal
density, and then assign them suitable weights that measure their quality in
some appropriate way. In this paper, we perform a thorough review of MC methods
for the estimation of static parameters in signal processing applications. A
historical note on the development of MC schemes is also provided, followed by
the basic MC method and a brief description of the rejection sampling (RS)
algorithm, as well as three sections describing many of the most relevant MCMC
and IS algorithms, and their combined use.
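The three sampling schemes the abstract names (rejection sampling, MCMC, and importance sampling) can be sketched in a few lines of Python. The target, proposal widths, and sample sizes below are illustrative assumptions, not choices from the paper; the target is a hypothetical standard normal known only up to its normalizing constant:

```python
import math
import random

random.seed(0)

def log_target(x):
    # Unnormalized log-density of the desired distribution:
    # a hypothetical standard normal, p(x) proportional to exp(-x^2 / 2).
    return -0.5 * x * x

def rejection_sample(n, sigma=2.0):
    # RS: draw from a wider N(0, sigma^2) proposal q and accept with
    # probability p(x) / (M q(x)); with M = sigma * sqrt(2 pi) the log
    # acceptance ratio simplifies to -x^2/2 + x^2/(2 sigma^2).
    out = []
    while len(out) < n:
        x = random.gauss(0.0, sigma)
        if math.log(random.random()) <= log_target(x) + 0.5 * (x / sigma) ** 2:
            out.append(x)
    return out

def mh_chain(n, step=1.0):
    # MCMC: random-walk Metropolis-Hastings; candidates are accepted or
    # rejected so the chain's stationary distribution is the target.
    x, chain = 0.0, []
    for _ in range(n):
        cand = x + random.gauss(0.0, step)
        if math.log(random.random()) < log_target(cand) - log_target(x):
            x = cand
        chain.append(x)
    return chain

def is_estimate(n, sigma=2.0):
    # IS: weight draws from a simple N(0, sigma^2) proposal; normalizing
    # constants cancel in the self-normalized estimator.
    draws = [random.gauss(0.0, sigma) for _ in range(n)]
    w = [math.exp(log_target(z) + 0.5 * (z / sigma) ** 2) for z in draws]
    return sum(wi * z for wi, z in zip(w, draws)) / sum(w)

rs_mean = sum(rejection_sample(5000)) / 5000
chain = mh_chain(20000)
mh_mean = sum(chain[2000:]) / len(chain[2000:])  # discard burn-in
is_mean = is_estimate(20000)
# All three estimates of E[X] should lie close to the true value 0.
```

All three routines are consistent estimators of the same expectation; they differ only in how samples are generated and reweighted, which is exactly the axis along which the survey organizes the literature.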
Related papers
- Algebraic Geometrical Analysis of Metropolis Algorithm When Parameters Are Non-identifiable [0.4604003661048266]
The Metropolis algorithm is one of the Markov chain Monte Carlo (MCMC) methods that realize sampling from the target probability distribution.
We are concerned with the sampling from the distribution in non-identifiable cases that involve models with Fisher information matrices that may fail to be invertible.
arXiv Detail & Related papers (2024-06-01T09:04:14Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
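The workflow described above (unsupervised SOM training, then labeling a few BMUs) can be sketched as follows; the data, map size, and neighborhood kernel are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: two hypothetical 2-D clusters.
data = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
                  rng.normal(3.0, 0.3, (100, 2))])

# Step 1: train a tiny 1-D SOM of 4 prototypes on the unlabeled data,
# pulling each sample's best matching unit (BMU) and its grid
# neighbors toward the sample.
protos = rng.normal(1.5, 0.1, (4, 2))
for t in range(50):
    lr = 0.5 * (1.0 - t / 50.0)                      # decaying learning rate
    for x in data:
        bmu = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
        for j in range(len(protos)):
            h = np.exp(-((j - bmu) ** 2) / 2.0)      # Gaussian neighborhood
            protos[j] += lr * h * (x - protos[j])

# Step 2: minimal supervision -- one labeled point per class,
# assigned to its BMU.
bmu_label = {}
for pt, y in [((0.0, 0.0), 0), ((3.0, 3.0), 1)]:
    bmu = int(np.argmin(np.linalg.norm(protos - np.array(pt), axis=1)))
    bmu_label[bmu] = y

# Step 3: predict by projecting onto the map; an unlabeled BMU falls
# back to the nearest labeled unit in weight space.
def predict(x):
    bmu = int(np.argmin(np.linalg.norm(protos - np.asarray(x), axis=1)))
    if bmu not in bmu_label:
        bmu = min(bmu_label, key=lambda k: np.linalg.norm(protos[k] - protos[bmu]))
    return bmu_label[bmu]
```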
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- Reverse Diffusion Monte Carlo [19.35592726471155]
We propose a novel Monte Carlo sampling algorithm called reverse diffusion Monte Carlo (rdMC).
rdMC is distinct from Markov chain Monte Carlo (MCMC) methods.
arXiv Detail & Related papers (2023-07-05T05:42:03Z)
- Ideal Observer Computation by Use of Markov-Chain Monte Carlo with Generative Adversarial Networks [12.521662223741671]
The Ideal Observer (IO) has been advocated for use as a figure-of-merit (FOM) for evaluating and optimizing medical imaging systems.
A sampling-based method that employs Markov-Chain Monte Carlo (MCMC) techniques was previously proposed to estimate the IO performance.
In this study, a novel MCMC method that employs a generative adversarial network (GAN)-based SOM, referred to as MCMC-GAN, is described and evaluated.
arXiv Detail & Related papers (2023-04-02T02:51:50Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
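For reference, a plain (non-optimized) AIS pass with a fixed geometric bridge looks like the sketch below; the toy target, schedule, and transition kernel are illustrative assumptions, and the paper's contribution is precisely to optimize such bridging distributions rather than fix them:

```python
import math
import random

random.seed(2)

def log_p(x):   # unnormalized target: N(1, 0.5^2) without its constant
    return -((x - 1.0) ** 2) / (2.0 * 0.25)

def log_q(x):   # normalized initial density: standard normal
    return -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)

def log_f(x, b):  # geometric bridge f_b proportional to q^(1-b) p^b
    return (1.0 - b) * log_q(x) + b * log_p(x)

K, N = 50, 2000                        # annealing levels, particles
betas = [k / K for k in range(K + 1)]
log_w = []
for _ in range(N):
    x = random.gauss(0.0, 1.0)         # start from the initial density
    lw = 0.0
    for k in range(1, K + 1):
        # accumulate the annealed importance weight for level k
        lw += (betas[k] - betas[k - 1]) * (log_p(x) - log_q(x))
        # one Metropolis move leaving f_{beta_k} invariant
        cand = x + random.gauss(0.0, 0.5)
        if math.log(random.random()) < log_f(cand, betas[k]) - log_f(x, betas[k]):
            x = cand
    log_w.append(lw)

# The average weight estimates the target's normalizing constant,
# sqrt(2 pi) * 0.5 (about 1.2533) for this toy target.
z_hat = sum(math.exp(lw) for lw in log_w) / N
```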
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Low-variance estimation in the Plackett-Luce model via quasi-Monte Carlo sampling [58.14878401145309]
We develop a novel approach to producing more sample-efficient estimators of expectations in the Plackett-Luce (PL) model.
We illustrate our findings both theoretically and empirically using real-world recommendation data from Amazon Music and the Yahoo learning-to-rank challenge.
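The paper's construction is specific to the Plackett-Luce model, but the underlying quasi-Monte Carlo idea, replacing pseudo-random draws with a low-discrepancy sequence to cut estimator variance, can be sketched generically (the integrand below is an illustrative toy, not from the paper):

```python
import random

def van_der_corput(n, base=2):
    # First n points of the base-b van der Corput low-discrepancy
    # sequence (the radical inverse of 1..n in the given base).
    pts = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            k, rem = divmod(k, base)
            x += rem / denom
        pts.append(x)
    return pts

# Toy expectation E[u^2] for u ~ Uniform(0, 1); the true value is 1/3.
n = 4096
qmc_est = sum(u * u for u in van_der_corput(n)) / n

random.seed(3)
mc_est = sum(random.random() ** 2 for _ in range(n)) / n
# The QMC error shrinks near O(1/n), versus O(1/sqrt(n)) for plain MC.
```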
arXiv Detail & Related papers (2022-05-12T11:15:47Z)
- Compressed Monte Carlo with application in particle filtering [11.84836209560411]
We introduce the theory and practice of a Compressed MC (C-MC) scheme to compress the statistical information contained in a set of random samples.
C-MC is useful within particle filtering and adaptive IS algorithms, as shown by three novel schemes introduced in this work.
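C-MC itself compresses the statistical information in a particle set; for context, a plain bootstrap particle filter of the kind it plugs into can be sketched as follows (the state-space model and noise levels are illustrative assumptions):

```python
import math
import random

random.seed(4)

# Hypothetical linear-Gaussian state-space model:
#   x_t = 0.9 x_{t-1} + N(0, 0.5^2),   y_t = x_t + N(0, 0.5^2)
N = 1000
particles = [random.gauss(0.0, 1.0) for _ in range(N)]

def filter_step(particles, y):
    # propagate each particle through the transition model
    props = [0.9 * x + random.gauss(0.0, 0.5) for x in particles]
    # importance-weight by the observation likelihood
    w = [math.exp(-((y - x) ** 2) / (2.0 * 0.25)) for x in props]
    est = sum(wi * x for wi, x in zip(w, props)) / sum(w)  # filtered mean
    # multinomial resampling to fight weight degeneracy
    return random.choices(props, weights=w, k=len(props)), est

# Simulate a short observation record, then filter it.
true_x, ys = 0.0, []
for _ in range(30):
    true_x = 0.9 * true_x + random.gauss(0.0, 0.5)
    ys.append(true_x + random.gauss(0.0, 0.5))

for y in ys:
    particles, est = filter_step(particles, y)
# est now holds the filtered posterior-mean estimate of the last state.
```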
arXiv Detail & Related papers (2021-07-18T14:32:04Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Approximate MMAP by Marginal Search [78.50747042819503]
We present a strategy for marginal MAP queries in graphical models.
The proposed confidence measure properly detects instances for which the algorithm is accurate.
For sufficiently high confidence levels, the algorithm gives the exact solution or an approximation whose Hamming distance from the exact one is small.
arXiv Detail & Related papers (2020-02-12T07:41:13Z)
- Markov-Chain Monte Carlo Approximation of the Ideal Observer using Generative Adversarial Networks [14.792685152780795]
The Ideal Observer (IO) performance has been advocated when optimizing medical imaging systems for signal detection tasks.
To approximate the IO test statistic, sampling-based methods that employ Markov-Chain Monte Carlo (MCMC) techniques have been developed.
Deep learning methods that employ generative adversarial networks (GANs) hold great promise to learn object models from image data.
arXiv Detail & Related papers (2020-01-26T21:51:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.