A New Probabilistic Distance Metric With Application In Gaussian Mixture Reduction
- URL: http://arxiv.org/abs/2306.07309v1
- Date: Mon, 12 Jun 2023 17:50:09 GMT
- Title: A New Probabilistic Distance Metric With Application In Gaussian Mixture Reduction
- Authors: Ahmad Sajedi, Yuri A. Lawryshyn, and Konstantinos N. Plataniotis
- Abstract summary: This paper presents a new distance metric to compare two continuous probability density functions.
The main advantage of this metric is that it can provide an analytic, closed-form expression for a mixture of Gaussian distributions.
- Score: 25.07521362926412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a new distance metric to compare two continuous probability density functions. The main advantage of this metric is that, unlike other statistical measures, it can provide an analytic, closed-form expression for a mixture of Gaussian distributions while satisfying all metric properties. These characteristics enable fast, stable, and efficient calculations, which are highly desirable in real-world signal processing applications. The application in mind is Gaussian Mixture Reduction (GMR), which is widely used in density estimation, recursive tracking, and belief propagation. To address this problem, we developed a novel algorithm dubbed the Optimization-based Greedy GMR (OGGMR), which employs our metric as a criterion to approximate a high-order Gaussian mixture with a lower-order one. Experimental results show that the OGGMR algorithm is significantly faster and more efficient than state-of-the-art GMR algorithms while retaining the geometric shape of the original mixture.
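To make the reduction pipeline concrete, below is a minimal sketch of a greedy 1-D mixture-reduction loop in the spirit of OGGMR. The paper's new metric is not reproduced here; as a stand-in criterion, the sketch uses the L2 (integrated squared error) distance, which likewise admits a closed form for Gaussian mixtures via the identity ∫ N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2). The function names and the 1-D setting are illustrative assumptions, not the authors' implementation.
```python
# A minimal sketch of greedy Gaussian mixture reduction; the selection
# criterion is the closed-form L2 distance, a stand-in for the paper's metric.
import numpy as np

def gauss_overlap(m1, v1, m2, v2):
    """Closed form of the integral of N(x; m1, v1) * N(x; m2, v2) dx in 1-D."""
    v = v1 + v2
    return np.exp(-0.5 * (m1 - m2) ** 2 / v) / np.sqrt(2 * np.pi * v)

def l2_distance(w1, m1, v1, w2, m2, v2):
    """Closed-form L2 distance between two 1-D Gaussian mixtures."""
    def cross(wa, ma, va, wb, mb, vb):
        return sum(wa[i] * wb[j] * gauss_overlap(ma[i], va[i], mb[j], vb[j])
                   for i in range(len(wa)) for j in range(len(wb)))
    sq = (cross(w1, m1, v1, w1, m1, v1)
          - 2.0 * cross(w1, m1, v1, w2, m2, v2)
          + cross(w2, m2, v2, w2, m2, v2))
    return np.sqrt(max(sq, 0.0))          # guard tiny negative round-off

def merge(w, m, v, i, j):
    """Moment-preserving merge of components i and j."""
    ws = w[i] + w[j]
    mm = (w[i] * m[i] + w[j] * m[j]) / ws
    vv = (w[i] * (v[i] + (m[i] - mm) ** 2)
          + w[j] * (v[j] + (m[j] - mm) ** 2)) / ws
    keep = [k for k in range(len(w)) if k not in (i, j)]
    return np.append(w[keep], ws), np.append(m[keep], mm), np.append(v[keep], vv)

def greedy_reduce(w, m, v, target):
    """Repeatedly apply the merge that stays closest to the original mixture."""
    w0, m0, v0 = w.copy(), m.copy(), v.copy()
    while len(w) > target:
        best = None
        for i in range(len(w)):
            for j in range(i + 1, len(w)):
                cand = merge(w, m, v, i, j)
                d = l2_distance(w0, m0, v0, *cand)
                if best is None or d < best[0]:
                    best = (d, cand)
        w, m, v = best[1]
    return w, m, v

weights = np.array([0.3, 0.3, 0.2, 0.2])
means = np.array([-2.0, -1.8, 1.0, 4.0])
variances = np.array([0.5, 0.4, 1.0, 0.8])
print(greedy_reduce(weights, means, variances, target=2))
```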
Related papers
- Learning Gaussian Mixtures Using the Wasserstein-Fisher-Rao Gradient Flow [12.455057637445174]
We propose a new algorithm to compute the nonparametric maximum likelihood estimator (NPMLE) in a Gaussian mixture model.
Our method is based on gradient descent over the space of probability measures equipped with the Wasserstein-Fisher-Rao geometry.
We conduct extensive numerical experiments to confirm the effectiveness of the proposed algorithm.
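As an illustration of the flow's two components, here is a stylized 1-D particle discretization: atom locations move along the Wasserstein (transport) part of the gradient, while atom weights follow the Fisher-Rao (reweighting) part. The step size, initialization, and unit-variance components are our assumptions, not the paper's exact scheme.
```python
# A stylized particle discretization of the Wasserstein-Fisher-Rao flow
# for the Gaussian-mixture NPMLE (1-D, unit-variance components assumed).
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])  # data

K = 50                                         # candidate atoms
theta = rng.uniform(y.min(), y.max(), K)       # atom locations
w = np.full(K, 1.0 / K)                        # atom weights
eta = 0.1                                      # step size (assumed)

def phi(x):                                    # standard normal density
    return np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)

for _ in range(500):
    diff = y[:, None] - theta[None, :]         # (n, K)
    comp = phi(diff)                           # component densities
    mix = comp @ w + 1e-300                    # mixture density at each y_i
    # First variation of the average log-likelihood, evaluated at each atom.
    G = comp.T @ (1.0 / mix) / len(y)
    # Wasserstein part: transport atoms along the gradient of G.
    gradG = (diff * comp).T @ (1.0 / mix) / len(y)
    theta = theta + eta * gradG
    # Fisher-Rao part: multiplicative (replicator-style) reweighting.
    w = w * np.exp(eta * (G - 1.0))
    w = w / w.sum()

print(theta[w > 0.02], w[w > 0.02])            # surviving atoms near -2 and 3
```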
arXiv Detail & Related papers (2023-01-04T18:59:35Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm for mixtures of elliptical distributions is investigated, with the ability to handle potentially missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
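For flavor, here is a minimal EM loop for a two-component Student-t mixture (one member of the elliptical family): the latent scale weights u downweight outliers, which is the source of the robustness. The paper's treatment of missing entries via conditional expectations is omitted, and all settings are illustrative.
```python
# A minimal robust EM sketch for a two-component Student-t mixture (1-D);
# the paper's missing-data handling is omitted for brevity.
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(1)
x = np.concatenate([rng.standard_t(3, 300) - 3, rng.standard_t(3, 300) + 3])

nu = 3.0                                    # degrees of freedom (fixed here)
pi_ = np.array([0.5, 0.5])                  # mixing weights
mu = np.array([-1.0, 1.0])                  # component locations
s2 = np.array([1.0, 1.0])                   # component scales (squared)

for _ in range(100):
    # E-step: responsibilities r and latent precision weights u.
    dens = np.stack([student_t.pdf(x, nu, loc=mu[k], scale=np.sqrt(s2[k]))
                     for k in range(2)], axis=1)          # (n, 2)
    r = pi_ * dens
    r = r / r.sum(axis=1, keepdims=True)
    d2 = (x[:, None] - mu) ** 2 / s2                      # squared Mahalanobis
    u = (nu + 1.0) / (nu + d2)              # small for outliers -> robustness
    # M-step: weighted moment updates.
    ru = r * u
    mu = (ru * x[:, None]).sum(0) / ru.sum(0)
    s2 = (ru * (x[:, None] - mu) ** 2).sum(0) / r.sum(0)
    pi_ = r.mean(0)

print(mu, np.sqrt(s2), pi_)
```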
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
- Information Theoretic Structured Generative Modeling [13.117829542251188]
A novel generative model framework called the structured generative model (SGM) is proposed that makes optimization straightforward.
The implementation employs a single neural network, driven by an orthonormal input and a single white noise source, that is adapted to learn an infinite Gaussian mixture model.
Preliminary results show that SGM significantly improves MINE estimation in terms of data efficiency and variance, outperforms conventional and variational Gaussian mixture models, and is also useful for training adversarial networks.
arXiv Detail & Related papers (2021-10-12T07:44:18Z)
- A Stochastic Newton Algorithm for Distributed Convex Optimization [62.20732134991661]
We analyze a stochastic Newton algorithm for homogeneous distributed convex optimization, where each machine can calculate stochastic gradients of the same population objective.
We show that our method can reduce the number and frequency of required communication rounds compared to existing methods, without hurting performance.
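The communication pattern can be sketched as follows: one gradient-averaging round per step, with Newton directions computed from purely local Hessians and then averaged. This is a generic distributed-Newton flavor for illustration (logistic regression, exact local quantities), not the paper's stochastic algorithm.
```python
# A generic distributed-Newton sketch: average gradients across machines,
# solve Newton systems with local Hessians only, then average directions.
import numpy as np

rng = np.random.default_rng(2)
M, n, d = 4, 500, 5                        # machines, samples/machine, dims
w_true = rng.normal(size=d)
shards = []
for _ in range(M):
    X = rng.normal(size=(n, d))
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ w_true)))
    shards.append((X, y))

lam = 1e-3                                 # ridge term keeps Hessians invertible
w = np.zeros(d)
for _ in range(10):
    # Communication round: average the local logistic-loss gradients.
    grads = [X.T @ (1 / (1 + np.exp(-X @ w)) - y) / n + lam * w
             for X, y in shards]
    g = np.mean(grads, axis=0)
    # No Hessian communication: each machine uses its local Hessian.
    dirs = []
    for X, y in shards:
        p = 1 / (1 + np.exp(-X @ w))
        H = X.T @ (X * (p * (1 - p))[:, None]) / n + lam * np.eye(d)
        dirs.append(np.linalg.solve(H, g))
    w = w - np.mean(dirs, axis=0)          # averaged Newton step

print(np.linalg.norm(w - w_true))          # should be small
```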
arXiv Detail & Related papers (2021-10-07T17:51:10Z)
- Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in the 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
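The iteration under analysis is the unadjusted Langevin Monte Carlo recursion; a minimal sketch against an assumed 2-D Gaussian target is below. The step size h is exactly the quantity whose effect on the 2-Wasserstein error such analyses bound.
```python
# A minimal Langevin Monte Carlo (LMC) sketch; the Gaussian target is an
# illustrative assumption so the result can be checked against np.cov.
import numpy as np

rng = np.random.default_rng(3)
Sigma_inv = np.array([[2.0, 0.5],
                      [0.5, 1.0]])           # target is N(0, Sigma)

def grad_log_pi(x):
    return -Sigma_inv @ x                    # score of the Gaussian target

h = 0.05                                     # step size (drives the bias)
x = np.zeros(2)
samples = []
for k in range(20000):
    x = x + h * grad_log_pi(x) + np.sqrt(2 * h) * rng.normal(size=2)
    if k >= 2000:                            # discard burn-in
        samples.append(x.copy())

print(np.cov(np.array(samples).T))           # compare to the target covariance
print(np.linalg.inv(Sigma_inv))
```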
arXiv Detail & Related papers (2021-09-08T18:00:05Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high-fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
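The decomposition idea can be previewed numerically: under a reflection symmetry r(x) = -x, an invariant kernel splits into symmetric and antisymmetric positive-semidefinite components that sum back to the original kernel; the method then attaches a variational GP to each component. The RBF kernel and the two-element group below are illustrative choices, not the paper's full construction.
```python
# A numerical sketch of a harmonic kernel decomposition under the
# reflection group {identity, x -> -x}; each component is itself a
# valid (PSD) kernel, and the components sum back to the original.
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 2))

K = rbf(X, X)
Kr = rbf(X, -X)                      # kernel against reflected inputs
K_sym = 0.5 * (K + Kr)               # symmetric harmonic component
K_anti = 0.5 * (K - Kr)              # antisymmetric harmonic component

print(np.allclose(K_sym + K_anti, K))              # exact decomposition
print(np.linalg.eigvalsh(K_sym).min() > -1e-8,     # both parts are PSD
      np.linalg.eigvalsh(K_anti).min() > -1e-8)
```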
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
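The AMP recursion being built on looks like the sketch below, where a plain soft-threshold denoiser stands in for the learned Gaussian-mixture denoiser of L-GM-AMP; the Onsager correction term is what keeps the effective noise approximately Gaussian across iterations. Problem sizes and the threshold rule are illustrative.
```python
# A minimal AMP sketch for compressed sensing; soft-thresholding is a
# stand-in for the learned Gaussian-mixture denoiser in L-GM-AMP.
import numpy as np

rng = np.random.default_rng(5)
N, M, k = 500, 250, 25
A = rng.normal(size=(M, N)) / np.sqrt(M)      # i.i.d. sensing matrix
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=M)

def soft(v, tau):                             # stand-in denoiser
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

alpha = 2.0                                   # threshold multiplier (tunable)
x = np.zeros(N)
z = y.copy()
for _ in range(30):
    tau = alpha * np.linalg.norm(z) / np.sqrt(M)   # effective noise level
    x_new = soft(x + A.T @ z, tau)
    # Onsager term: fraction of active coordinates times the old residual.
    onsager = (N / M) * np.mean(np.abs(x_new) > 0) * z
    z = y - A @ x_new + onsager
    x = x_new

print(np.linalg.norm(x - x0) / np.linalg.norm(x0))  # relative error
```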
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Since these models are guaranteed to be expressive, they can capture multimodal target distributions without compromising the efficiency of sample generation.
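The classical building block that these flows make trainable is iterated marginal Gaussianization plus rotation; a small sketch of that non-learned version is below, with the rotation chosen at random rather than learned.
```python
# A sketch of classical iterative Gaussianization (marginal Gaussianization
# followed by a rotation); Gaussianization flows learn these layers instead.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
# Bimodal 2-D data the transform should push toward a standard normal.
X = np.concatenate([rng.normal(-3, 1, (500, 2)), rng.normal(3, 1, (500, 2))])

def marginal_gaussianize(Z):
    """Send each coordinate through its empirical CDF, then Phi^{-1}."""
    out = np.empty_like(Z)
    n = len(Z)
    for j in range(Z.shape[1]):
        ranks = np.argsort(np.argsort(Z[:, j]))
        out[:, j] = norm.ppf((ranks + 0.5) / n)   # avoid CDF values 0 and 1
    return out

Z = X
for _ in range(10):
    Z = marginal_gaussianize(Z)
    Q, _ = np.linalg.qr(rng.normal(size=(2, 2)))  # random orthogonal layer
    Z = Z @ Q

print(Z.mean(axis=0))        # ~ zeros
print(np.cov(Z.T))           # ~ identity
```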
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
- Gaussian Mixture Reduction with Composite Transportation Divergence [15.687740538194413]
We propose a novel optimization-based GMR method based on composite transportation divergence (CTD).
We develop a majorization-minimization algorithm for computing the reduced mixture and establish its theoretical convergence.
Our unified framework empowers users to select the most appropriate cost function in CTD to achieve superior performance.
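A stripped-down version of the majorization-minimization loop reads like clustering over components: assign each original Gaussian to its nearest reduced Gaussian under a closed-form cost, then update each reduced Gaussian as the moment-matched barycenter of its cluster. The Gaussian KL divergence below is one of the costs CTD admits; the 1-D setting is for brevity.
```python
# A 1-D sketch of the assignment/barycenter MM loop behind CTD-style GMR,
# with the closed-form Gaussian KL divergence as the component cost.
import numpy as np

def kl_gauss(m1, v1, m2, v2):
    """KL divergence between 1-D Gaussians, in closed form."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def reduce_mm(w, m, v, K, iters=50):
    idx = np.argsort(m)[np.linspace(0, len(m) - 1, K).astype(int)]
    a, b = m[idx].copy(), v[idx].copy()         # initial reduced components
    u = np.full(K, 1.0 / K)
    for _ in range(iters):
        # Assignment step: nearest reduced component per original one.
        cost = np.array([[kl_gauss(m[i], v[i], a[k], b[k]) for k in range(K)]
                         for i in range(len(m))])
        z = cost.argmin(axis=1)
        # Barycenter step: moment-matching merge within each cluster.
        for k in range(K):
            sel = z == k
            if not sel.any():
                continue
            ws = w[sel] / w[sel].sum()
            a[k] = ws @ m[sel]
            b[k] = ws @ (v[sel] + (m[sel] - a[k]) ** 2)
        u = np.array([w[z == k].sum() for k in range(K)])
    return u, a, b

w = np.array([0.25, 0.25, 0.25, 0.25])
m = np.array([-2.0, -1.9, 1.0, 1.2])
v = np.array([0.5, 0.6, 0.4, 0.5])
print(reduce_mm(w, m, v, K=2))
```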
arXiv Detail & Related papers (2020-02-19T19:52:17Z)
- A fast and efficient Modal EM algorithm for Gaussian mixtures [0.0]
In the modal approach to clustering, clusters are defined as the local maxima of the underlying probability density function.
The Modal EM algorithm is an iterative procedure that can identify the local maxima of any density function.
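For a Gaussian mixture the Modal EM iteration has a particularly clean form, sketched below: the E-step computes posterior responsibilities at the current point, and the M-step moves to a precision-weighted mean, ascending the density toward a local mode. The mixture parameters are illustrative.
```python
# A 1-D Modal EM sketch for a Gaussian mixture: iterate responsibilities
# (E-step) and a precision-weighted mean update (M-step) to find a mode.
import numpy as np

w = np.array([0.4, 0.3, 0.3])       # mixture weights (illustrative)
mu = np.array([-2.0, 0.5, 3.0])     # component means
v = np.array([0.5, 0.8, 0.6])       # component variances

def modal_em(x, iters=200):
    for _ in range(iters):
        # E-step: posterior responsibility of each component at x.
        dens = w * np.exp(-0.5 * (x - mu) ** 2 / v) / np.sqrt(2 * np.pi * v)
        p = dens / dens.sum()
        # M-step: argmax over x of sum_k p_k * log N(x; mu_k, v_k).
        x = (p * mu / v).sum() / (p / v).sum()
    return x

# Started from each component mean, the iteration climbs to nearby modes.
print([round(modal_em(m0), 3) for m0 in mu])
```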
arXiv Detail & Related papers (2020-02-10T08:34:16Z)