Differentiable Particle Filtering using Optimal Placement Resampling
- URL: http://arxiv.org/abs/2402.16639v1
- Date: Mon, 26 Feb 2024 15:09:56 GMT
- Title: Differentiable Particle Filtering using Optimal Placement Resampling
- Authors: Domonkos Csuzdi, Olivér Törő, Tamás Bécsi
- Abstract summary: A good proposal distribution and a good resampling scheme are crucial to obtain low variance estimates.
This work proposes a differentiable resampling scheme by deterministic sampling from an empirical cumulative distribution function.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Particle filters are a frequent choice for inference tasks in nonlinear and
non-Gaussian state-space models. They can either be used for state inference by
approximating the filtering distribution or for parameter inference by
approximating the marginal data (observation) likelihood. A good proposal
distribution and a good resampling scheme are crucial to obtain low variance
estimates. However, traditional methods like multinomial resampling introduce
nondifferentiability in PF-based loss functions for parameter estimation,
prohibiting gradient-based learning tasks. This work proposes a differentiable
resampling scheme by deterministic sampling from an empirical cumulative
distribution function. We evaluate our method on parameter inference tasks and
proposal learning.
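The core idea in the abstract, resampling by deterministic sampling from the empirical cumulative distribution function, can be sketched as below. This is an illustrative sketch only: the function name is hypothetical, and the paper's actual scheme additionally makes the CDF-inversion step differentiable (e.g. by interpolating the empirical CDF rather than hard index selection), which is not reproduced here.

```python
import numpy as np

def deterministic_resample(particles, weights):
    """Resample at fixed quantiles of the weighted empirical CDF.

    The i-th new particle is placed at the (i + 0.5)/N quantile, so no
    random numbers are drawn. Hard index selection via searchsorted is
    itself nondifferentiable; a differentiable variant would smooth or
    interpolate the CDF inversion (an assumption about the paper's method).
    """
    n = len(particles)
    cdf = np.cumsum(weights)
    cdf /= cdf[-1]                    # guard against rounding drift
    u = (np.arange(n) + 0.5) / n      # deterministic quantile points
    idx = np.searchsorted(cdf, u)     # invert the empirical CDF
    return particles[idx]

# toy example: three particles with uneven weights
x = np.array([0.0, 1.0, 2.0])
w = np.array([0.1, 0.1, 0.8])
print(deterministic_resample(x, w))   # deterministic output: [1. 2. 2.]
```

Because the quantile points are fixed, repeated calls with the same inputs always yield the same resampled set, unlike multinomial resampling.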
Related papers
- Semi-Implicit Functional Gradient Flow [30.32233517392456]
We propose a functional gradient ParVI method that uses perturbed particles as the approximation family.
The corresponding functional gradient flow, which can be estimated via denoising score matching, exhibits strong theoretical convergence guarantee.
arXiv Detail & Related papers (2024-10-23T15:00:30Z)
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Transformer-based Parameter Estimation in Statistics [0.0]
We propose a transformer-based approach to parameter estimation.
It does not even require knowing the probability density function, which is needed by numerical methods.
It is shown that our approach achieves accuracy similar to or better than numerical methods, as measured by mean squared error.
arXiv Detail & Related papers (2024-02-28T04:30:41Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
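The low-rank plus diagonal posterior decomposition this summary mentions can be sketched as follows. All names and dimensions here are illustrative assumptions, not the paper's implementation; the point is only that storing a diagonal plus a rank-L factor costs O(PL) memory instead of O(P^2) while still supporting fast matrix-vector products.

```python
import numpy as np

# Sketch: covariance Sigma ~ diag(d) + W @ W.T, with P parameters and rank L.
rng = np.random.default_rng(0)
P, L = 1000, 5
d = rng.uniform(0.5, 1.5, P)      # diagonal part
W = rng.normal(size=(P, L))       # low-rank factor
v = rng.normal(size=P)

# Sigma @ v in O(P*L) time, without materializing the P x P matrix:
fast = d * v + W @ (W.T @ v)

# dense reference, for checking only (O(P^2) memory):
full = (np.diag(d) + W @ W.T) @ v
assert np.allclose(fast, full)
```

In an EKF-style update this structure keeps the per-step cost linear in the number of network parameters rather than quadratic.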
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Unsupervised Learning of Sampling Distributions for Particle Filters [80.6716888175925]
We put forward four methods for learning sampling distributions from observed measurements.
Experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
arXiv Detail & Related papers (2023-02-02T15:50:21Z)
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Differentiable Particle Filtering via Entropy-Regularized Optimal Transport [19.556744028461004]
By leveraging optimal transport ideas, we introduce a principled differentiable particle filter and provide convergence results.
arXiv Detail & Related papers (2021-02-15T21:05:33Z)
- Doubly Robust Semiparametric Difference-in-Differences Estimators with High-Dimensional Data [15.27393561231633]
We propose a doubly robust two-stage semiparametric difference-in-differences estimator for estimating heterogeneous treatment effects.
The first stage allows a general set of machine learning methods to be used to estimate the propensity score.
In the second stage, we derive the rates of convergence for both the parametric parameter and the unknown function.
arXiv Detail & Related papers (2020-09-07T15:14:29Z)
- Multiplicative Gaussian Particle Filter [18.615555573235987]
We propose a new sampling-based approach for approximate inference in filtering problems.
Instead of approximating conditional distributions with a finite set of states, as done in particle filters, our approach approximates the distribution with a weighted sum of continuous basis functions.
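The continuous approximation this summary describes can be sketched with a Gaussian mixture; the component locations, scales, and weights below are made-up illustrative values, and the paper's actual choice of basis functions may differ.

```python
import numpy as np

def gaussian(x, mean, std):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

# illustrative mixture parameters (assumptions, not from the paper)
means   = np.array([-1.0, 0.5, 2.0])
stds    = np.array([0.5, 0.3, 0.8])
weights = np.array([0.2, 0.5, 0.3])   # sum to 1

def density(x):
    """Weighted sum of continuous basis functions: a smooth density,
    unlike a particle approximation, which is a sum of point masses."""
    return np.sum(weights * gaussian(x, means, stds))

# sanity check: the mixture integrates to ~1 over a wide grid
grid = np.linspace(-6.0, 8.0, 20001)
mass = np.sum([density(x) for x in grid]) * (grid[1] - grid[0])
assert abs(mass - 1.0) < 1e-3
```

Because `density` is smooth in x, expectations and gradients can be computed analytically or by quadrature rather than by sampling.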
arXiv Detail & Related papers (2020-02-29T09:19:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.