Unsupervised Learning of Sampling Distributions for Particle Filters
- URL: http://arxiv.org/abs/2302.01174v1
- Date: Thu, 2 Feb 2023 15:50:21 GMT
- Title: Unsupervised Learning of Sampling Distributions for Particle Filters
- Authors: Fernando Gama, Nicolas Zilberstein, Martin Sevilla, Richard Baraniuk,
Santiago Segarra
- Abstract summary: We put forward four methods for learning sampling distributions from observed measurements.
Experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
- Score: 80.6716888175925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate estimation of the states of a nonlinear dynamical system is crucial
for its design, synthesis, and analysis. Particle filters are estimators
constructed by simulating trajectories from a sampling distribution and
averaging them based on their importance weights. For particle filters to be
computationally tractable, it must be feasible to simulate the trajectories by
drawing from the sampling distribution. Simultaneously, these trajectories need
to reflect the reality of the nonlinear dynamical system so that the resulting
estimators are accurate. Thus, the crux of particle filters lies in designing
sampling distributions that are both easy to sample from and lead to accurate
estimators. In this work, we propose to learn the sampling distributions. We
put forward four methods for learning sampling distributions from observed
measurements. Three of the methods are parametric methods in which we learn the
mean and covariance matrix of a multivariate Gaussian distribution; each
method exploits a different aspect of the data (generic, time structure, graph
structure). The fourth method is a nonparametric alternative in which we
directly learn a transform of a uniform random variable. All four methods are
trained in an unsupervised manner by maximizing the likelihood that the states
may have produced the observed measurements. Our computational experiments
demonstrate that learned sampling distributions exhibit better performance than
designed, minimum-degeneracy sampling distributions.
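To make the construction above concrete, here is a minimal sketch of a sequential importance resampling (SIR) particle filter with a Gaussian sampling distribution whose mean and spread stand in for the parameters that the paper learns. The scalar state-space model, noise levels, and proposal parameters are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar nonlinear state-space model (an assumption, not the
# paper's benchmark):  x_t = f(x_{t-1}) + v_t,  y_t = g(x_t) + w_t.
f = lambda x: 0.9 * x + 0.1 * np.sin(x)   # state transition
g = lambda x: 0.5 * x**2                  # measurement map
sigma_v, sigma_w = 0.3, 0.5               # process / measurement noise stds

def particle_filter(ys, n=500, mu=0.0, std=0.5):
    """SIR particle filter with a Gaussian proposal N(f(x) + mu, std^2).

    In the paper, the sampling distribution's mean and covariance are
    learned by maximizing the likelihood of the observed measurements;
    (mu, std) here are fixed placeholders for those learned parameters.
    """
    x = rng.normal(0.0, 1.0, n)                   # initial particle cloud
    estimates = []
    for y in ys:
        eps = rng.normal(mu, std, n)              # draw from the proposal
        x = f(x) + eps
        # Importance weight = measurement likelihood times the ratio of
        # the true transition density to the proposal density.
        log_lik = -0.5 * ((y - g(x)) / sigma_w) ** 2
        log_p = -0.5 * (eps / sigma_v) ** 2 - np.log(sigma_v)
        log_q = -0.5 * ((eps - mu) / std) ** 2 - np.log(std)
        logw = log_lik + log_p - log_q
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * x))           # weighted state estimate
        x = rng.choice(x, size=n, p=w)            # multinomial resampling
    return np.array(estimates)

# Simulate a short trajectory and run the filter on its measurements.
x_true, ys = 0.5, []
for _ in range(50):
    x_true = f(x_true) + rng.normal(0.0, sigma_v)
    ys.append(g(x_true) + rng.normal(0.0, sigma_w))
print(particle_filter(np.array(ys))[:5])
```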
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
- Stochastic Sampling from Deterministic Flow Models [8.849981177332594]
We present a method to turn flow models into a family of stochastic differential equations (SDEs) that have the same marginal distributions (see the first sketch after this list).
We empirically demonstrate advantages of our method on a toy Gaussian setup and on the large scale ImageNet generation task.
arXiv Detail & Related papers (2024-10-03T05:18:28Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Adversarial sampling of unknown and high-dimensional conditional distributions [0.0]
In this paper, both the sampling and the inference of the underlying distribution are handled with a data-driven method known as generative adversarial networks (GANs).
A GAN trains two competing neural networks to produce a generator that can effectively draw samples from the training-set distribution (see the conditional-GAN sketch after this list).
It is shown that all the versions of the proposed algorithm effectively sample the target conditional distribution with minimal impact on the quality of the samples.
arXiv Detail & Related papers (2021-11-08T12:23:38Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Relative Entropy Gradient Sampler for Unnormalized Distributions [14.060615420986796]
We propose the relative entropy gradient sampler (REGS) for sampling from unnormalized distributions.
REGS is a particle method that seeks a sequence of simple nonlinear transforms, iteratively pushing samples from a reference distribution toward samples from an unnormalized target distribution.
arXiv Detail & Related papers (2021-10-06T14:10:38Z)
- Effective Proximal Methods for Non-convex Non-smooth Regularized Learning [27.775096437736973]
We show that the independent sampling scheme tends to improve performance over the commonly used uniform sampling scheme.
Our new analysis also derives a faster convergence speed for the sampling than the best one available so far.
arXiv Detail & Related papers (2020-09-14T16:41:32Z)
- Stein Variational Inference for Discrete Distributions [70.19352762933259]
We propose a simple yet general framework that transforms discrete distributions to equivalent piecewise continuous distributions.
Our method outperforms traditional algorithms such as Gibbs sampling and discontinuous Hamiltonian Monte Carlo.
We demonstrate that our method provides a promising tool for learning ensembles of binarized neural networks (BNNs).
In addition, such a transform can be straightforwardly employed in gradient-free kernelized Stein discrepancy to perform goodness-of-fit (GOF) tests on discrete distributions (see the SVGD sketch after this list).
arXiv Detail & Related papers (2020-03-01T22:45:41Z)
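First sketch, for the "Stochastic Sampling from Deterministic Flow Models" entry: by the Fokker-Planck equation, the SDE dx = [v_t(x) + eps * grad log p_t(x)] dt + sqrt(2*eps) dW shares its marginals with the deterministic flow dx = v_t(x) dt. The toy Gaussian path below, where velocity and score are available in closed form, is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gaussian path: marginals p_t = N(mu_t, sig_t^2) interpolating between
# N(0, 1) at t=0 and N(m, s^2) at t=1 (straight-line interpolant, assumed).
m, s = 3.0, 0.5
mu = lambda t: t * m
sig = lambda t: np.sqrt((1 - t) ** 2 + (t * s) ** 2)

def velocity(x, t):
    """Closed-form flow velocity transporting the Gaussian marginal path."""
    dsig = (-(1 - t) + t * s**2) / sig(t)
    return m + (dsig / sig(t)) * (x - mu(t))

def score(x, t):
    """Closed-form score of the Gaussian marginal p_t."""
    return -(x - mu(t)) / sig(t) ** 2

def sample(n=10000, steps=200, eps=0.5):
    """Euler-Maruyama on dx = [v + eps*score] dt + sqrt(2*eps) dW.

    This SDE has the same marginals as the flow ODE; eps=0 recovers
    the deterministic sampler.
    """
    x = rng.normal(0.0, 1.0, n)       # start from p_0 = N(0, 1)
    dt = 1.0 / steps
    for k in range(steps):
        t = k * dt
        drift = velocity(x, t) + eps * score(x, t)
        x = x + drift * dt + np.sqrt(2 * eps * dt) * rng.normal(size=n)
    return x

for eps in (0.0, 0.5):
    xs = sample(eps=eps)
    print(f"eps={eps}: mean={xs.mean():.3f} (target {m}), "
          f"std={xs.std():.3f} (target {s})")
```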
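Conditional-GAN sketch, for the adversarial conditional-sampling entry: a generic textbook construction in PyTorch, not the paper's architecture. The generator maps (noise, condition) pairs to samples; the discriminator scores (sample, condition) pairs. The toy target y | c ~ N(c, 0.1^2) is an assumption for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy conditional target (assumed): y | c ~ N(c, 0.1^2), c ~ U(-1, 1).
def real_batch(n):
    c = 2 * torch.rand(n, 1) - 1
    y = c + 0.1 * torch.randn(n, 1)
    return y, c

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # (z, c) -> y
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))  # (y, c) -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    y, c = real_batch(128)
    z = torch.randn(128, 1)
    y_fake = G(torch.cat([z, c], dim=1))

    # Discriminator: real (y, c) pairs vs generated ones.
    d_loss = (bce(D(torch.cat([y, c], dim=1)), torch.ones(128, 1))
              + bce(D(torch.cat([y_fake.detach(), c], dim=1)), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator on the same conditions.
    g_loss = bce(D(torch.cat([y_fake, c], dim=1)), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample from the learned conditional at c = 0.5.
with torch.no_grad():
    c = torch.full((1000, 1), 0.5)
    samples = G(torch.cat([torch.randn(1000, 1), c], dim=1))
print(f"c=0.5: mean={samples.mean():.3f}, std={samples.std():.3f}")
```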
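SVGD sketch, for the Stein variational entry: the kernelized update behind such methods is Stein variational gradient descent, phi(x_i) = mean_j [k(x_j, x_i) grad log p(x_j) + d/dx_j k(x_j, x_i)]. The sketch runs SVGD on a continuous Gaussian-mixture surrogate standing in for the paper's piecewise-continuous relaxation of a discrete target; the mixture, kernel bandwidth, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Continuous surrogate target (assumed): a two-component 1-D Gaussian
# mixture, standing in for a piecewise-continuous relaxation of a
# discrete distribution.
def grad_log_p(x):
    w, mu, s = np.array([0.5, 0.5]), np.array([-2.0, 2.0]), 0.6
    comp = w * np.exp(-0.5 * ((x[:, None] - mu) / s) ** 2)
    resp = comp / comp.sum(axis=1, keepdims=True)    # responsibilities
    return (resp * (mu - x[:, None]) / s**2).sum(axis=1)

def svgd(n=100, steps=500, lr=0.05, h=0.5):
    """SVGD with an RBF kernel: particles follow the kernelized Stein
    direction phi(x_i) = mean_j [k(x_j,x_i) grad log p(x_j) + d/dx_j k]."""
    x = rng.normal(0.0, 1.0, n)
    for _ in range(steps):
        diff = x[:, None] - x[None, :]               # x_i - x_j
        k = np.exp(-diff**2 / (2 * h**2))            # k(x_j, x_i)
        grad_k = diff * k / h**2                     # d k / d x_j
        phi = (k * grad_log_p(x)[None, :] + grad_k).mean(axis=1)
        x = x + lr * phi
    return x

x = svgd()
print(f"fraction near +2 mode: {(x > 0).mean():.2f}")  # ~0.5 for equal weights
```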