Regime Learning for Differentiable Particle Filters
- URL: http://arxiv.org/abs/2405.04865v3
- Date: Wed, 12 Jun 2024 10:05:11 GMT
- Title: Regime Learning for Differentiable Particle Filters
- Authors: John-Joseph Brady, Yuhui Luo, Wenwu Wang, Victor Elvira, Yunpeng Li
- Abstract summary: Differentiable particle filters are an emerging class of models that combine sequential Monte Carlo techniques with the flexibility of neural networks to perform state space inference.
No prior approaches effectively learn both the individual regimes and the switching process simultaneously.
We propose the neural network based regime learning differentiable particle filter (RLPF) to address this problem.
- Score: 19.35021771863565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentiable particle filters are an emerging class of models that combine sequential Monte Carlo techniques with the flexibility of neural networks to perform state space inference. This paper concerns the case where the system may switch between a finite set of state-space models, i.e. regimes. No prior approaches effectively learn both the individual regimes and the switching process simultaneously. In this paper, we propose the neural network based regime learning differentiable particle filter (RLPF) to address this problem. We further design a training procedure for the RLPF and other related algorithms. We demonstrate competitive performance compared to the previous state-of-the-art algorithms on a pair of numerical experiments.
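The abstract does not spell out the filtering recursion it builds on, so the following minimal sketch may help fix ideas: each particle samples a regime index from a learnable switching distribution, is propagated through that regime's dynamics, and is reweighted by the measurement likelihood. The linear-Gaussian regimes, the `regime_logits` parameterisation, and all function names below are assumptions made for illustration, not the authors' RLPF; note also that the discrete regime draw blocks gradients to the switching weights, which is exactly the kind of difficulty a regime-learning differentiable particle filter has to address.

```python
import math
import torch

# Assumed toy setup: K linear-Gaussian candidate regimes (not the paper's models).
K, N = 3, 200                                       # number of regimes / particles
A = torch.tensor([0.9, -0.5, 0.3])                  # per-regime transition coefficients
Q, R = 0.5, 0.2                                     # process / measurement noise std
regime_logits = torch.zeros(K, requires_grad=True)  # learnable switching weights

def log_likelihood(x, y):
    """Gaussian log-density of observation y given state x (identity observation model)."""
    return -0.5 * ((y - x) / R) ** 2 - math.log(R * math.sqrt(2.0 * math.pi))

def filter_step(x, w, y):
    """One regime-switching bootstrap step: sample a regime per particle,
    propagate through that regime's dynamics, then reweight by the likelihood."""
    probs = torch.softmax(regime_logits, dim=0)
    k = torch.multinomial(probs, N, replacement=True)  # regime index per particle (non-differentiable draw)
    x_new = A[k] * x + Q * torch.randn(N)              # regime-conditional transition
    logw = torch.log(w) + log_likelihood(x_new, y)
    w_new = torch.softmax(logw, dim=0)                 # normalised importance weights
    return x_new, w_new

# Run the filter on a placeholder observation sequence (resampling omitted for brevity).
ys = torch.randn(50)
x, w = torch.randn(N), torch.full((N,), 1.0 / N)
for y in ys:
    x, w = filter_step(x, w, y)
    state_mean = (w * x).sum()                         # filtering estimate of the state
```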
Related papers
- Learning state and proposal dynamics in state-space models using differentiable particle filters and neural networks [25.103069515802538]
We introduce a new method, StateMixNN, that uses a pair of neural networks to learn the proposal distribution and transition distribution of a particle filter.
Our method is trained targeting the log-likelihood, thereby requiring only the observation series.
The proposed method significantly improves recovery of the hidden state in comparison with the state-of-the-art, showing greater improvement in highly non-linear scenarios.
arXiv Detail & Related papers (2024-11-23T19:30:56Z)
- Permutation Invariant Learning with High-Dimensional Particle Filters [8.878254892409005]
Sequential learning in deep models often suffers from challenges such as catastrophic forgetting and loss of plasticity.
We introduce a novel permutation-invariant learning framework based on high-dimensional particle filters.
arXiv Detail & Related papers (2024-10-30T05:06:55Z)
- Differentiable Interacting Multiple Model Particle Filtering [24.26220422457388]
We propose a sequential Monte Carlo algorithm for parameter learning when the studied model exhibits random discontinuous jumps in behaviour.
We adopt the emerging framework of differentiable particle filtering, wherein parameters are trained by gradient descent; a generic sketch of this style of training appears after this list.
We establish new theoretical results for the presented algorithms and demonstrate superior numerical performance compared to the previous state-of-the-art algorithms.
arXiv Detail & Related papers (2024-10-01T12:05:18Z)
- Learning Differentiable Particle Filter on the Fly [18.466658684464598]
Differentiable particle filters are an emerging class of sequential Bayesian inference techniques.
We propose an online learning framework for differentiable particle filters so that model parameters can be updated as data arrive.
arXiv Detail & Related papers (2023-12-10T17:54:40Z)
- Differentiable Bootstrap Particle Filters for Regime-Switching Models [43.03865620039904]
In real-world applications, both the state dynamics and measurements can switch between a set of candidate models.
This paper proposes a new differentiable particle filter for regime-switching state-space models.
The method can learn a set of unknown candidate dynamic and measurement models and track the state posteriors.
arXiv Detail & Related papers (2023-02-20T21:14:27Z)
- Understanding the Covariance Structure of Convolutional Filters [86.0964031294896]
Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions with notable structure.
We first observe that such learned filters have highly-structured covariance matrices, and we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks.
arXiv Detail & Related papers (2022-10-07T15:59:13Z)
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Learning Versatile Convolution Filters for Efficient Visual Recognition [125.34595948003745]
This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct a theoretical analysis of network complexity and introduce an efficient convolution scheme.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters achieve accuracy comparable to that of the original filters.
arXiv Detail & Related papers (2021-09-20T06:07:14Z)
- When is Particle Filtering Efficient for Planning in Partially Observed Linear Dynamical Systems? [60.703816720093016]
This paper initiates a study on the efficiency of particle filtering for sequential planning.
We are able to bound the number of particles needed so that the long-run reward of the policy based on particle filtering is close to that based on exact inference.
We believe this technique can be useful in other sequential decision-making problems.
arXiv Detail & Related papers (2020-06-10T17:43:43Z)
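Several of the entries above (StateMixNN and the differentiable interacting multiple model particle filter in particular) share the same training recipe: write the particle filter in an autodiff framework, use its estimate of the log marginal likelihood of the observations as the objective, and update the parameters by gradient descent. The sketch below illustrates that recipe on an assumed toy linear-Gaussian model; the model, the parameter names, and the plain multinomial resampling are simplifications for illustration and are not taken from any of the listed papers.

```python
import math
import torch

def dpf_log_marginal(theta, log_r, ys, n_particles=200):
    """Bootstrap particle filter estimate of log p(y_{1:T}) for an assumed toy model:
    x_t = theta * x_{t-1} + q * eps_t,   y_t = x_t + r * nu_t."""
    q, r = 0.5, torch.exp(log_r)              # fixed process std, learned measurement std
    x = torch.zeros(n_particles)
    log_Z = torch.zeros(())                   # running log-marginal-likelihood estimate
    for y in ys:
        x = theta * x + q * torch.randn(n_particles)   # propagate (bootstrap proposal)
        logw = -0.5 * ((y - x) / r) ** 2 - torch.log(r) - 0.5 * math.log(2.0 * math.pi)
        log_Z = log_Z + torch.logsumexp(logw, dim=0) - math.log(n_particles)
        # Hard multinomial resampling: the sampled ancestor indices carry no gradient.
        idx = torch.multinomial(torch.softmax(logw, dim=0), n_particles, replacement=True)
        x = x[idx]
    return log_Z

# Gradient-based training on the log-likelihood, requiring only the observation series.
theta = torch.tensor(0.2, requires_grad=True)
log_r = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([theta, log_r], lr=1e-2)
ys = torch.randn(100)                          # placeholder observation series
for step in range(200):
    opt.zero_grad()
    loss = -dpf_log_marginal(theta, log_r, ys)
    loss.backward()
    opt.step()
```

In this sketch gradients flow through the propagation and weighting steps but not through the resampled ancestor indices; much of the differentiable particle filtering literature, including several entries above, is concerned with relaxing or replacing that resampling step so the whole recursion can be trained end to end.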