A Bayesian Filtering Algorithm for Gaussian Mixture Models
- URL: http://arxiv.org/abs/1705.05495v2
- Date: Fri, 30 Jun 2023 06:27:19 GMT
- Title: A Bayesian Filtering Algorithm for Gaussian Mixture Models
- Authors: Adrian G. Wills, Johannes Hendriks, Christopher Renton, and Brett Ninness
- Abstract summary: A class of state-space systems can be modelled via Gaussian mixtures.
The exact solution to this filtering problem involves an exponential growth in the number of mixture terms.
A square-root implementation of the unified algorithm is presented.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Bayesian filtering algorithm is developed for a class of state-space
systems that can be modelled via Gaussian mixtures. In general, the exact
solution to this filtering problem involves an exponential growth in the number
of mixture terms and this is handled here by utilising a Gaussian mixture
reduction step after both the time and measurement updates. In addition, a
square-root implementation of the unified algorithm is presented and this
algorithm is profiled on several simulated systems. This includes the state
estimation for two non-linear systems that are strictly outside the class
considered in this paper.
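To make the abstract's recipe concrete, the sketch below implements a generic Gaussian-sum filter for a linear state-space model whose process noise is a Gaussian mixture, with a crude keep-the-heaviest-terms rule standing in for the paper's Gaussian mixture reduction step and without the square-root factorisation. This is a minimal illustration under assumed notation; all names (`predict`, `update`, `reduce_mixture`, `A`, `C`, etc.) are placeholders, not the authors' code.

```python
# Illustrative Gaussian-sum filter sketch; the model and all names are
# assumptions, not the paper's implementation.
import numpy as np

def predict(w, m, P, A, qw, qm, qP):
    """Time update: each (prior term, process-noise term) pair yields one
    term, so the mixture grows multiplicatively -- the growth the paper
    controls with a reduction step."""
    w2, m2, P2 = [], [], []
    for wi, mi, Pi in zip(w, m, P):
        for wj, mj, Pj in zip(qw, qm, qP):
            w2.append(wi * wj)
            m2.append(A @ mi + mj)
            P2.append(A @ Pi @ A.T + Pj)
    return np.array(w2), m2, P2

def update(w, m, P, C, R, y):
    """Measurement update (single Gaussian measurement noise for brevity;
    a mixture noise would multiply the term count again)."""
    w2, m2, P2 = [], [], []
    for wi, mi, Pi in zip(w, m, P):
        S = C @ Pi @ C.T + R             # innovation covariance
        K = Pi @ C.T @ np.linalg.inv(S)  # Kalman gain
        e = y - C @ mi                   # innovation
        lik = np.exp(-0.5 * e @ np.linalg.solve(S, e)) \
              / np.sqrt(np.linalg.det(2 * np.pi * S))
        w2.append(wi * lik)              # reweight by component evidence
        m2.append(mi + K @ e)
        P2.append((np.eye(len(mi)) - K @ C) @ Pi)
    w2 = np.array(w2)
    return w2 / w2.sum(), m2, P2

def reduce_mixture(w, m, P, max_terms):
    """Placeholder reduction: keep the heaviest terms and renormalise
    (the paper applies a proper Gaussian mixture reduction here)."""
    idx = np.argsort(w)[::-1][:max_terms]
    return w[idx] / w[idx].sum(), [m[i] for i in idx], [P[i] for i in idx]
```

A full filtering cycle is then `reduce_mixture(*update(*predict(w, m, P, A, qw, qm, qP), C, R, y), max_terms)`, which caps the number of terms carried to the next step.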
Related papers
- A convergent scheme for the Bayesian filtering problem based on the Fokker--Planck equation and deep splitting [0.0]
A numerical scheme for approximating the nonlinear filtering density is introduced and its convergence rate is established.
For the prediction step, the scheme approximates the Fokker--Planck equation with a deep splitting scheme, and performs an exact update through Bayes' formula.
This results in a classical prediction-update filtering algorithm that operates online for new observation sequences post-training.
arXiv Detail & Related papers (2024-09-22T20:25:45Z)
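For reference, the classical prediction-update recursion this entry refers to can be written as a toy grid-based Bayes filter. The sketch below is an illustration only: the paper's deep-splitting approximation of the Fokker--Planck prediction is not reproduced, and `transition` and `likelihood` are hypothetical user-supplied densities.

```python
# Toy prediction-update recursion on a grid; not the paper's scheme.
import numpy as np

def bayes_filter_step(grid, prior, transition, likelihood, y):
    """One prediction-update cycle for a scalar state on a uniform grid."""
    dx = grid[1] - grid[0]
    # Prediction: Chapman--Kolmogorov integral as a Riemann sum;
    # transition(x, grid) is assumed to return p(x | x') for x' in grid.
    pred = np.array([np.sum(transition(x, grid) * prior) * dx for x in grid])
    # Update: Bayes' formula with the new observation y, then renormalise.
    post = likelihood(y, grid) * pred
    return post / (np.sum(post) * dx)
```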
- Accelerated Inference for Partially Observed Markov Processes using Automatic Differentiation [4.872049174955585]
Automatic differentiation (AD) has driven recent advances in machine learning.
We show how to embed two existing AD particle filter methods in a theoretical framework that provides an extension to a new class of algorithms.
We develop likelihood algorithms suited to the Monte Carlo properties of the AD gradient estimate.
arXiv Detail & Related papers (2024-07-03T13:06:46Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Variational Gaussian filtering via Wasserstein gradient flows [6.023171219551961]
We present a novel approach to approximate Gaussian and mixture-of-Gaussians filtering.
Our method relies on a variational approximation via a gradient-flow representation.
arXiv Detail & Related papers (2023-03-11T12:22:35Z)
- Learning Gaussian Mixtures Using the Wasserstein-Fisher-Rao Gradient Flow [12.455057637445174]
We propose a new algorithm to compute the nonparametric maximum likelihood estimator (NPMLE) in a Gaussian mixture model.
Our method is based on gradient descent over the space of probability measures equipped with the Wasserstein-Fisher-Rao geometry.
We conduct extensive numerical experiments to confirm the effectiveness of the proposed algorithm.
arXiv Detail & Related papers (2023-01-04T18:59:35Z)
- Overlap-guided Gaussian Mixture Models for Point Cloud Registration [61.250516170418784]
Probabilistic 3D point cloud registration methods have shown competitive performance in overcoming noise, outliers, and density variations.
This paper proposes a novel overlap-guided probabilistic registration approach that computes the optimal transformation from matched Gaussian Mixture Model (GMM) parameters.
arXiv Detail & Related papers (2022-10-17T08:02:33Z)
- Gaussian mixture model on nodes of Bayesian network given maximal parental cliques [0.0]
We explain why and how Gaussian mixture models are used in Bayesian networks.
We propose a new method, called double iteration algorithm, to optimize the mixture model.
arXiv Detail & Related papers (2022-04-20T15:14:01Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm is investigated for mixtures of elliptical distributions with the property of handling potential missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
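For context, one EM iteration for a plain one-dimensional Gaussian mixture on fully observed data looks as follows. This is the standard baseline that the entry's robust, elliptical, missing-data variant generalises; every name in the sketch is illustrative.

```python
# Standard GMM EM step (1-D, no missing data); not the paper's algorithm.
import numpy as np

def em_step(x, weights, means, variances):
    """One EM iteration for a 1-D Gaussian mixture."""
    # E-step: responsibility of each component for each data point.
    dens = np.exp(-0.5 * (x[:, None] - means) ** 2 / variances) \
           / np.sqrt(2 * np.pi * variances)
    resp = weights * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted moment estimates.
    nk = resp.sum(axis=0)
    weights = nk / len(x)
    means = (resp * x[:, None]).sum(axis=0) / nk
    variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, variances
```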
- An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximating the solution of the filtering equations is a PDE-inspired method called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)
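For orientation, the multiplicative-weights primitive that the last entry's algorithm builds on is the standard Hedge update sketched below; the Klivans-Meka instantiation for graphical models is not reproduced, and `eta` is an assumed step size.

```python
# Generic multiplicative-weights (Hedge) step; not the paper's full algorithm.
import numpy as np

def hedge_update(weights, losses, eta=0.1):
    """Downweight each expert exponentially in the loss it just incurred,
    then renormalise the weights to sum to one."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()
```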