Learning Differentiable Particle Filter on the Fly
- URL: http://arxiv.org/abs/2312.05955v3
- Date: Sat, 16 Dec 2023 01:32:23 GMT
- Title: Learning Differentiable Particle Filter on the Fly
- Authors: Jiaxi Li, Xiongjie Chen, Yunpeng Li
- Abstract summary: Differentiable particle filters are an emerging class of sequential Bayesian inference techniques.
We propose an online learning framework for differentiable particle filters so that model parameters can be updated as data arrive.
- Score: 18.466658684464598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentiable particle filters are an emerging class of sequential Bayesian
inference techniques that use neural networks to construct components in state
space models. Existing approaches are mostly based on offline supervised
training strategies. This delays model deployment and leaves the obtained
filters susceptible to distribution shift in test-time data. In
this paper, we propose an online learning framework for differentiable particle
filters so that model parameters can be updated as data arrive. The technical
constraint is that there is no known ground truth state information in the
online inference setting. We address this by adopting an unsupervised loss to
construct the online model updating procedure, which involves a sequence of
filtering operations for online maximum likelihood-based parameter estimation.
We empirically evaluate the effectiveness of the proposed method, and compare
it with supervised learning methods in simulation settings including a
multivariate linear Gaussian state-space model and a simulated object tracking
experiment.
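To make the idea concrete, the sketch below (not the authors' code; the class and parameter names such as OnlineDPF, theta and the noise scales are assumed for illustration) runs a differentiable bootstrap particle filter on a one-dimensional linear-Gaussian model and, as each observation arrives, takes one gradient step on the negative log of the filter's incremental marginal-likelihood estimate, so that parameters are updated online without ground-truth states. It assumes Python with PyTorch.

    import math
    import torch

    torch.manual_seed(0)

    class OnlineDPF:
        """Hypothetical minimal differentiable bootstrap particle filter with
        online maximum-likelihood parameter updates (names are illustrative)."""

        def __init__(self, n_particles=200, lr=1e-2):
            self.N = n_particles
            # Unknown model parameters: [transition coefficient, observation coefficient].
            self.theta = torch.nn.Parameter(torch.tensor([0.5, 0.5]))
            self.opt = torch.optim.SGD([self.theta], lr=lr)
            self.particles = torch.randn(self.N, 1)

        def step(self, y):
            """Assimilate one observation y and take one gradient step on the
            negative log of the incremental marginal-likelihood estimate."""
            self.opt.zero_grad()
            a, c = self.theta[0], self.theta[1]
            # Propagate particles through the (differentiable) transition model.
            x = a * self.particles.detach() + 0.1 * torch.randn(self.N, 1)
            # Importance weights from the measurement model p(y | x).
            log_w = torch.distributions.Normal(c * x, 0.2).log_prob(y).squeeze(-1)
            # Unsupervised loss: no ground-truth states are needed.
            loss = -(torch.logsumexp(log_w, dim=0) - math.log(self.N))
            loss.backward()
            self.opt.step()
            # Multinomial resampling; gradients flow only within a single step here.
            with torch.no_grad():
                idx = torch.multinomial(torch.softmax(log_w, dim=0), self.N, replacement=True)
                self.particles = x[idx]
            return self.theta.detach().clone()

    # Stream data from an assumed true model (a=0.9, c=1.0) and update on the fly.
    a_true, c_true, x_true = 0.9, 1.0, torch.zeros(1)
    dpf = OnlineDPF()
    for t in range(500):
        x_true = a_true * x_true + 0.1 * torch.randn(1)
        y = c_true * x_true + 0.2 * torch.randn(1)
        theta_hat = dpf.step(y)
    print("online estimate of (a, c):", [round(v, 3) for v in theta_hat.tolist()])

In practice one would typically accumulate this loss over a short window of recent observations and use a differentiable (e.g. soft) resampling scheme so gradients can propagate across time steps; the single-step update above is kept deliberately simple.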
Related papers
- Permutation Invariant Learning with High-Dimensional Particle Filters [8.878254892409005] (arXiv 2024-10-30)
  Sequential learning in deep models often suffers from challenges such as catastrophic forgetting and loss of plasticity.
  We introduce a novel permutation-invariant learning framework based on high-dimensional particle filters.
- Differentiable Interacting Multiple Model Particle Filtering [24.26220422457388] (arXiv 2024-10-01)
  We propose a sequential Monte Carlo algorithm for parameter learning when the studied model exhibits random discontinuous jumps in behaviour.
  We adopt the emerging framework of differentiable particle filtering, wherein parameters are trained by gradient descent.
  We establish new theoretical results for the presented algorithms and demonstrate superior numerical performance compared to previous state-of-the-art algorithms.
- Regime Learning for Differentiable Particle Filters [19.35021771863565] (arXiv 2024-05-08)
  Differentiable particle filters are an emerging class of models that combine sequential Monte Carlo techniques with the flexibility of neural networks to perform state space inference.
  No prior approaches effectively learn both the individual regimes and the switching process simultaneously.
  We propose the neural-network-based regime learning differentiable particle filter (RLPF) to address this problem.
- Online Variational Sequential Monte Carlo [49.97673761305336] (arXiv 2023-12-19)
  We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
  Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on the fly.
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651] (arXiv 2023-06-14)
  In Online Continual Learning (OCL), a learning system receives a stream of data and sequentially performs prediction and training steps.
  We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
  In multi-class classification experiments, we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959] (arXiv 2023-05-31)
  We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
  The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
  In contrast to methods based on variational inference, our method is fully deterministic and does not require step-size tuning.
- Differentiable Bootstrap Particle Filters for Regime-Switching Models [43.03865620039904] (arXiv 2023-02-20)
  In real-world applications, both the state dynamics and measurements can switch between a set of candidate models.
  This paper proposes a new differentiable particle filter for regime-switching state-space models.
  The method can learn a set of unknown candidate dynamic and measurement models and track the state posteriors.
- Unsupervised Learning of Sampling Distributions for Particle Filters [80.6716888175925] (arXiv 2023-02-02)
  We put forward four methods for learning sampling distributions from observed measurements.
  Experiments demonstrate that learned sampling distributions exhibit better performance than designed, minimum-degeneracy sampling distributions.
- Particle-Based Score Estimation for State Space Model Learning in Autonomous Driving [62.053071723903834] (arXiv 2022-12-14)
  Multi-object state estimation is a fundamental problem for robotic applications.
  We consider learning maximum-likelihood parameters using particle methods.
  We apply our method to real data collected from autonomous vehicles.
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998] (arXiv 2022-06-07)
  We propose a computational framework to approximate Doob's $h$-transforms.
  The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
- Multitarget Tracking with Transformers [21.81266872964314] (arXiv 2021-04-01)
  Multitarget Tracking (MTT) is the problem of tracking the states of an unknown number of objects using noisy measurements.
  In this paper, we propose a high-performing deep-learning method for MTT based on the Transformer architecture.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.