Generalized Bayesian Filtering via Sequential Monte Carlo
- URL: http://arxiv.org/abs/2002.09998v2
- Date: Wed, 21 Oct 2020 15:05:58 GMT
- Title: Generalized Bayesian Filtering via Sequential Monte Carlo
- Authors: Ayman Boustati, \"Omer Deniz Akyildiz, Theodoros Damoulas, Adam M.
Johansen
- Abstract summary: We introduce a framework for inference in general state-space hidden Markov models (HMMs) under likelihood misspecification.
We leverage the loss-theoretic perspective of Generalized Bayesian Inference (GBI) to define generalised filtering recursions in HMMs.
We observe improved performance over both standard filtering algorithms and other robust filters.
- Score: 12.789129084258409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a framework for inference in general state-space hidden Markov
models (HMMs) under likelihood misspecification. In particular, we leverage the
loss-theoretic perspective of Generalized Bayesian Inference (GBI) to define
generalised filtering recursions in HMMs that can tackle the problem of
inference under model misspecification. In doing so, we arrive at principled
procedures for robust inference against observation contamination by utilising
the $\beta$-divergence. Operationalising the proposed framework is made
possible via sequential Monte Carlo methods (SMC), where most standard particle
methods, and their associated convergence results, are readily adapted to the
new setting. We apply our approach to object tracking and Gaussian process
regression problems, and observe improved performance over both standard
filtering algorithms and other robust filters.
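Operationally, the generalised recursion amounts to one change in a standard particle filter: the likelihood weight g(y_t | x_t) is replaced by exp(-l_beta(y_t, x_t)), where l_beta is the beta-divergence loss. Below is a minimal NumPy sketch on a toy 1-D linear-Gaussian model with occasional gross outliers; the model, parameter values, and function names are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D model: x_t = 0.9 x_{t-1} + N(0, 1), y_t = x_t + N(0, 0.5^2),
# with roughly 10% of observations hit by gross outliers (contamination).
T, N, a, sx, sy, beta = 100, 1000, 0.9, 1.0, 0.5, 0.2

x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sx * rng.standard_normal()
    y[t] = x_true[t] + sy * rng.standard_normal()
    if rng.random() < 0.1:                     # contaminate this observation
        y[t] += 20.0 * rng.standard_normal()

def beta_log_weight(y_t, mu, sigma, beta):
    """Log generalised weight -l_beta for a Gaussian likelihood N(y; mu, sigma^2).

    l_beta(y, x) = -(1/beta) g(y|x)^beta + (1/(beta+1)) * I, where the
    integral I = int g(y'|x)^(beta+1) dy' has the Gaussian closed form
    (2*pi*sigma^2)^(-beta/2) / sqrt(beta + 1).
    """
    dens = np.exp(-0.5 * ((y_t - mu) / sigma) ** 2) / np.sqrt(2 * np.pi * sigma**2)
    integral = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(beta + 1)
    return dens**beta / beta - integral / (beta + 1)

# Bootstrap particle filter with generalised (beta-divergence) weights.
particles, est = rng.standard_normal(N), np.zeros(T)
for t in range(T):
    particles = a * particles + sx * rng.standard_normal(N)   # propagate
    logw = beta_log_weight(y[t], particles, sy, beta)         # reweight
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                                    # filtering mean
    particles = rng.choice(particles, size=N, p=w)            # resample

print("RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))
```

As beta -> 0 the generalised weight recovers the standard likelihood weight, so beta acts as a robustness knob trading statistical efficiency for insensitivity to contaminated observations.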
Related papers
- Sequential Kalman Monte Carlo for gradient-free inference in Bayesian inverse problems [1.3654846342364308]
We introduce Sequential Kalman Monte Carlo samplers to perform gradient-free inference in inverse problems.
FAKI (Flow Annealed Kalman Inversion) employs normalizing flows to relax the Gaussian ansatz of the target measures in Ensemble Kalman Inversion (EKI).
FAKI alone is not able to correct for the model linearity assumptions in EKI.
arXiv Detail & Related papers (2024-07-10T15:56:30Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Non-Sequential Ensemble Kalman Filtering using Distributed Arrays [0.24578723416255752]
This work introduces a new, distributed implementation of the Ensemble Kalman Filter (EnKF).
It allows for non-sequential assimilation of large datasets in high-dimensional problems.
arXiv Detail & Related papers (2023-11-21T16:42:26Z) - A Unified Approach to Controlling Implicit Regularization via Mirror
- A Unified Approach to Controlling Implicit Regularization via Mirror Descent [18.536453909759544]
Mirror descent (MD) is a notable generalization of gradient descent (GD).
We show that MD can be implemented efficiently and enjoys fast convergence under suitable conditions.
arXiv Detail & Related papers (2023-06-24T03:57:26Z) - GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP,
- GEC: A Unified Framework for Interactive Decision Making in MDP, POMDP, and Beyond [101.5329678997916]
We study sample efficient reinforcement learning (RL) under the general framework of interactive decision making.
We propose a novel complexity measure, generalized eluder coefficient (GEC), which characterizes the fundamental tradeoff between exploration and exploitation.
We show that RL problems with low GEC form a remarkably rich class, which subsumes low Bellman eluder dimension problems, bilinear class, low witness rank problems, PO-bilinear class, and generalized regular PSR.
arXiv Detail & Related papers (2022-11-03T16:42:40Z) - Higher Order Kernel Mean Embeddings to Capture Filtrations of Stochastic
Processes [11.277354787690646]
We introduce a family of higher order kernel mean embeddings that generalizes the notion of KME.
We derive empirical estimators for the associated higher order maximum mean discrepancies (MMDs) and prove consistency.
We construct a family of universal kernels on processes that makes it possible to solve real-world calibration and optimal stopping problems.
arXiv Detail & Related papers (2021-09-08T12:27:25Z) - Consistency of regularized spectral clustering in degree-corrected mixed
- Consistency of regularized spectral clustering in degree-corrected mixed membership model [1.0965065178451106]
We propose an efficient approach called mixed regularized spectral clustering (Mixed-RSC for short) based on the regularized Laplacian matrix.
Mixed-RSC is designed around an ideal cone structure that arises in the eigen-decomposition of the population regularized Laplacian matrix.
We show that the algorithm is consistent under mild conditions by providing error bounds for the inferred membership vector of each node.
arXiv Detail & Related papers (2020-11-23T02:30:53Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Gaussian Process-based Min-norm Stabilizing Controller for
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - Posterior Differential Regularization with f-divergence for Improving
- Posterior Differential Regularization with f-divergence for Improving Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with $f$-divergence can substantially improve model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content presented (including all information) and is not responsible for any consequences of its use.