Learning state and proposal dynamics in state-space models using differentiable particle filters and neural networks
- URL: http://arxiv.org/abs/2411.15638v1
- Date: Sat, 23 Nov 2024 19:30:56 GMT
- Title: Learning state and proposal dynamics in state-space models using differentiable particle filters and neural networks
- Authors: Benjamin Cox, Santiago Segarra, Victor Elvira
- Abstract summary: We introduce a new method, StateMixNN, that uses a pair of neural networks to learn the proposal distribution and transition distribution of a particle filter.
Our method is trained targeting the log-likelihood, thereby requiring only the observation series.
The proposed method significantly improves recovery of the hidden state in comparison with the state-of-the-art, showing greater improvement in highly non-linear scenarios.
- Score: 25.103069515802538
- Abstract: State-space models are a popular statistical framework for analysing sequential data. Within this framework, particle filters are often used to perform inference on non-linear state-space models. We introduce a new method, StateMixNN, that uses a pair of neural networks to learn the proposal distribution and transition distribution of a particle filter. Both distributions are approximated using multivariate Gaussian mixtures. The component means and covariances of these mixtures are learnt as outputs of learned functions. Our method is trained targeting the log-likelihood, thereby requiring only the observation series, and combines the interpretability of state-space models with the flexibility and approximation power of artificial neural networks. The proposed method significantly improves recovery of the hidden state in comparison with the state-of-the-art, showing greater improvement in highly non-linear scenarios.
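As a rough illustration of the idea in the abstract (not the authors' implementation), the sketch below uses two small networks whose outputs parameterise Gaussian-mixture proposal and transition densities, and trains them by maximising a particle filter's log-likelihood estimate computed from the observation series alone. The network sizes, mixture count, toy observation model, and the simplified non-reparameterised handling of sampling and resampling are all assumptions made for brevity.

```python
# Minimal sketch (not the authors' code) of the StateMixNN idea: two networks
# output Gaussian-mixture parameters for the proposal and transition densities,
# and a particle filter's log-likelihood estimate is the training objective.
# All sizes, the toy observation model, and the simplified sampling/resampling
# are illustrative assumptions.
import math
import torch
import torch.nn as nn
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal


class MixtureNet(nn.Module):
    """Maps a conditioning vector to a diagonal Gaussian mixture over the state."""

    def __init__(self, in_dim, state_dim, n_comp=3, hidden=64):
        super().__init__()
        self.n_comp, self.state_dim = n_comp, state_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_comp * (1 + 2 * state_dim)),
        )

    def forward(self, cond):
        out = self.net(cond)
        logits = out[..., :self.n_comp]                       # mixture weights
        means, log_scales = out[..., self.n_comp:].chunk(2, dim=-1)
        means = means.reshape(-1, self.n_comp, self.state_dim)
        scales = log_scales.reshape(-1, self.n_comp, self.state_dim).exp()
        components = Independent(Normal(means, scales), 1)
        return MixtureSameFamily(Categorical(logits=logits), components)


def pf_log_likelihood(ys, proposal_net, transition_net, obs_log_prob, n_particles=100):
    """Particle filter returning a differentiable log-likelihood estimate.

    ys: (T, obs_dim) observations; obs_log_prob(x, y) gives per-particle log p(y|x).
    """
    x = torch.zeros(n_particles, proposal_net.state_dim)      # assumed known initial state
    loglik = torch.zeros(())
    for y in ys:
        cond = torch.cat([x, y.expand(n_particles, -1)], dim=-1)
        q = proposal_net(cond)                     # proposal q(x_t | x_{t-1}, y_t)
        p = transition_net(x)                      # transition p(x_t | x_{t-1})
        x_new = q.sample()                         # simplification: no reparameterisation
        log_w = (p.log_prob(x_new) + obs_log_prob(x_new, y) - q.log_prob(x_new)
                 - math.log(n_particles))          # importance weights (uniform after resampling)
        loglik = loglik + torch.logsumexp(log_w, dim=0)
        idx = Categorical(logits=log_w).sample((n_particles,))  # multinomial resampling
        x = x_new[idx]                                           # (DPFs relax this step)
    return loglik


# Illustrative usage on a toy 1-D model (all settings assumed, not from the paper).
state_dim, obs_dim = 1, 1
prop = MixtureNet(state_dim + obs_dim, state_dim)
trans = MixtureNet(state_dim, state_dim)
obs_ll = lambda x, y: Normal(x.squeeze(-1), 1.0).log_prob(y.squeeze(-1))
ys = torch.randn(20, obs_dim)                                  # stand-in observation series
opt = torch.optim.Adam(list(prop.parameters()) + list(trans.parameters()), lr=1e-3)
opt.zero_grad()
loss = -pf_log_likelihood(ys, prop, trans, obs_ll)             # train on the log-likelihood only
loss.backward()
opt.step()
```

Because the objective is the filter's own log-likelihood estimate, only the observations are needed for training; a faithful differentiable particle filter would additionally use relaxed or reparameterised sampling and resampling so that gradients flow through the particle trajectories.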
Related papers
- Learning to Approximate Particle Smoothing Trajectories via Diffusion Generative Models [16.196738720721417]
Learning systems from sparse observations is critical in numerous fields, including biology, finance, and physics.
We introduce a method that integrates conditional particle filtering with ancestral sampling and diffusion models.
We demonstrate the approach on time-series generation and related tasks, including vehicle tracking and single-cell RNA sequencing data.
arXiv Detail & Related papers (2024-06-01T21:54:01Z) - Regime Learning for Differentiable Particle Filters [19.35021771863565]
Differentiable particle filters are an emerging class of models that combine sequential Monte Carlo techniques with the flexibility of neural networks to perform state space inference.
No prior approaches effectively learn both the individual regimes and the switching process simultaneously.
We propose the neural network based regime learning differentiable particle filter (RLPF) to address this problem.
arXiv Detail & Related papers (2024-05-08T07:43:43Z) - Normalising Flow-based Differentiable Particle Filters [19.09640071505051]
We present a differentiable particle filtering framework that uses (conditional) normalising flows to build its dynamic model, proposal distribution, and measurement model.
We derive the theoretical properties of the proposed filters and evaluate the proposed normalising flow-based differentiable particle filters' performance through a series of numerical experiments.
arXiv Detail & Related papers (2024-03-03T12:23:17Z) - Learning Differentiable Particle Filter on the Fly [18.466658684464598]
Differentiable particle filters are an emerging class of sequential Bayesian inference techniques.
We propose an online learning framework for differentiable particle filters so that model parameters can be updated as data arrive.
arXiv Detail & Related papers (2023-12-10T17:54:40Z) - Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs).
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not make any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z) - An overview of differentiable particle filters for data-adaptive sequential Bayesian inference [19.09640071505051]
Particle filters (PFs) provide an efficient mechanism for solving non-linear sequential state estimation problems.
An emerging trend involves constructing components of particle filters using neural networks and optimising them by gradient descent.
Differentiable particle filters are a promising computational tool for performing inference on sequential data in complex, high-dimensional tasks; a plain bootstrap particle filter, the non-learned baseline these methods extend, is sketched after this list.
arXiv Detail & Related papers (2023-02-19T18:03:53Z) - Unsupervised Learning of Sampling Distributions for Particle Filters [80.6716888175925]
We put forward four methods for learning sampling distributions from observed measurements.
Experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
arXiv Detail & Related papers (2023-02-02T15:50:21Z) - Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z) - Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Deep Variational Models for Collaborative Filtering-based Recommender Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z)
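To make the particle-filtering mechanism described in the overview entry above concrete, here is a minimal non-learned bootstrap particle filter on a toy one-dimensional random-walk model; the model, noise levels, and particle count are illustrative assumptions rather than settings from any of the listed papers. Differentiable particle filters replace pieces of this loop (the proposal, transition, observation model, or resampling) with neural-network components trained by gradient descent.

```python
# Minimal bootstrap particle filter on an assumed toy model: a 1-D Gaussian
# random-walk state observed with Gaussian noise. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, n_particles=500, sigma_x=1.0, sigma_y=0.5):
    """Return filtering means for a 1-D random walk observed in Gaussian noise."""
    x = rng.normal(0.0, 1.0, size=n_particles)                 # initial particle cloud
    means = []
    for y in ys:
        x = x + rng.normal(0.0, sigma_x, size=n_particles)     # propagate via the transition
        log_w = -0.5 * ((y - x) / sigma_y) ** 2                # observation log-weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * x))                            # filtering mean estimate
        x = x[rng.choice(n_particles, size=n_particles, p=w)]  # multinomial resampling
    return np.array(means)

# Simulate a short observation series from the same toy model and filter it.
true_x = np.cumsum(rng.normal(0.0, 1.0, size=50))
observations = true_x + rng.normal(0.0, 0.5, size=50)
print(bootstrap_pf(observations)[:5])
```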