Gaussian mixture model on nodes of Bayesian network given maximal
parental cliques
- URL: http://arxiv.org/abs/2204.09532v1
- Date: Wed, 20 Apr 2022 15:14:01 GMT
- Title: Gaussian mixture model on nodes of Bayesian network given maximal
parental cliques
- Authors: Yiran Dong, Chuanhou Gao
- Abstract summary: We explain why and how we use Gaussian mixture models in Bayesian networks.
We propose a new method, called the double iteration algorithm, to optimize the mixture model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper uses Gaussian mixture models instead of linear Gaussian models to
fit the distribution of every node in a Bayesian network. We explain why and
how we use Gaussian mixture models in Bayesian networks. We also propose a
new method, called the double iteration algorithm, to optimize the mixture model:
it combines the expectation maximization algorithm with gradient descent, and it
performs well on Bayesian networks with mixture models. In experiments we test
the Gaussian mixture model and the optimization algorithm on graphs generated by
different structure learning algorithms on real data sets, and we give the details
of every experiment.
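
The abstract describes the double iteration algorithm only at a high level. Below is a minimal sketch of one plausible reading, for a one-dimensional mixture: an outer EM-style loop recomputes responsibilities, and an inner loop takes gradient steps on the expected complete-data log-likelihood instead of using closed-form M-step updates. All names and update details here are assumptions, not the authors' implementation.

    # Sketch of a "double iteration" fit for a 1-D Gaussian mixture:
    # outer loop = E-step (responsibilities), inner loop = gradient ascent
    # on Q(theta) in place of the closed-form M-step.
    import numpy as np

    def log_gauss(x, mu, log_var):
        # Elementwise log N(x | mu, exp(log_var)).
        return -0.5 * (np.log(2 * np.pi) + log_var
                       + (x - mu) ** 2 / np.exp(log_var))

    def double_iteration_gmm(x, k=3, outer=50, inner=10, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        mu = rng.choice(x, size=k, replace=False).astype(float)
        log_var = np.zeros(k)    # log variances keep the scales positive
        logit_w = np.zeros(k)    # softmax logits for the mixture weights
        for _ in range(outer):
            # Outer (EM-style) step: responsibilities r[n, j].
            w = np.exp(logit_w); w /= w.sum()
            log_p = log_gauss(x[:, None], mu[None, :], log_var[None, :]) + np.log(w)
            r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # Inner (gradient) step: ascend the expected complete-data
            # log-likelihood with the responsibilities held fixed.
            for _ in range(inner):
                diff = x[:, None] - mu[None, :]
                inv_var = np.exp(-log_var)
                mu += lr * (r * diff * inv_var).mean(axis=0)
                log_var += lr * (0.5 * r * (diff ** 2 * inv_var - 1.0)).mean(axis=0)
                w = np.exp(logit_w); w /= w.sum()
                logit_w += lr * (r.mean(axis=0) - w)   # gradient through softmax
        w = np.exp(logit_w); w /= w.sum()
        return w, mu, np.exp(log_var)

With closed-form M-steps this reduces to ordinary EM; the gradient inner loop is the part that would let extra terms from the network structure enter the objective without conjugate updates.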
Related papers
- Learning general Gaussian mixtures with efficient score matching [16.06356123715737]
We study the problem of learning mixtures of $k$ Gaussians in $d$ dimensions.
We make no separation assumptions on the underlying mixture components.
We give an algorithm that draws $d^{\mathrm{poly}(k/\varepsilon)}$ samples from the target mixture, runs in sample-polynomial time, and constructs a sampler.
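
For intuition about what "score" means here: the score of a density is the gradient of its log, which score-matching learners estimate from samples. A minimal sketch of the analytic score of a known one-dimensional Gaussian mixture follows; this is the target quantity, not the paper's estimator.

    # Score function grad_x log p(x) of a known 1-D Gaussian mixture.
    import numpy as np

    def gmm_score(x, weights, means, variances):
        # For p(x) = sum_j w_j N(x | mu_j, var_j):
        #   score(x) = sum_j post_j(x) * (mu_j - x) / var_j,
        # where post_j(x) is the posterior probability of component j at x.
        weights, means, variances = map(np.asarray, (weights, means, variances))
        x = np.asarray(x, dtype=float)[:, None]
        log_comp = (np.log(weights) - 0.5 * np.log(2 * np.pi * variances)
                    - (x - means) ** 2 / (2 * variances))
        post = np.exp(log_comp - log_comp.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        return (post * (means - x) / variances).sum(axis=1)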
arXiv Detail & Related papers (2024-04-29T17:30:36Z) - Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z) - Universal Lower Bounds and Optimal Rates: Achieving Minimax Clustering Error in Sub-Exponential Mixture Models [8.097200145973389]
We first establish a universal lower bound for the error rate in clustering any mixture model.
We then demonstrate that iterative algorithms attain this lower bound in mixture models with sub-exponential tails.
For datasets better modelled by Poisson or Negative Binomial mixtures, we study mixture models whose distributions belong to an exponential family.
In such mixtures, we establish that Bregman hard clustering, a variant of Lloyd's algorithm employing a Bregman divergence, is rate optimal.
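
Bregman hard clustering is concrete enough to sketch: replace the squared Euclidean distance in Lloyd's algorithm with a Bregman divergence, and use the fact that cluster centroids under any Bregman divergence are still arithmetic means (Banerjee et al., 2005). A minimal version for count data with the generalized KL divergence, which corresponds to Poisson components (helper names are ours):

    import numpy as np

    def gen_kl(x, lam, eps=1e-9):
        # Generalised KL / I-divergence d(x, lam) = x log(x/lam) - x + lam,
        # summed over features; this is the Bregman divergence of the
        # Poisson log-partition function.
        x = np.maximum(x, eps); lam = np.maximum(lam, eps)
        return (x * np.log(x / lam) - x + lam).sum(axis=-1)

    def bregman_hard_clustering(X, k=3, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        for _ in range(iters):
            # Assignment step: nearest centre in Bregman divergence.
            d = np.stack([gen_kl(X, c) for c in centers], axis=1)  # (n, k)
            labels = d.argmin(axis=1)
            # Update step: centroids are cluster means, exactly as in Lloyd's.
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

Swapping gen_kl for the squared Euclidean distance recovers standard k-means, which is the sense in which this is a Lloyd's variant.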
arXiv Detail & Related papers (2024-02-23T16:51:17Z) - An Efficient 1 Iteration Learning Algorithm for Gaussian Mixture Model
And Gaussian Mixture Embedding For Neural Network [2.261786383673667]
The new algorithm brings more robustness and simplicity than classic Expectation Maximization (EM) algorithm.
It also improves accuracy and takes only one iteration for learning.
arXiv Detail & Related papers (2023-08-18T10:17:59Z) - A Robust and Flexible EM Algorithm for Mixtures of Elliptical
Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm is investigated for mixtures of elliptical distributions with the property of handling potential missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
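
The blurb gives no update equations; below is a sketch of the classical missing-data EM step for the simpler diagonal-covariance Gaussian mixture case, where missing coordinates (NaNs) drop out of the E-step likelihood and are imputed by their per-component conditional expectations in the M-step. The paper's elliptical-distribution extension is more general than this.

    import numpy as np

    def em_step_missing(X, w, mu, var):
        # X: (n, d) with NaN for missing entries; w: (k,); mu, var: (k, d).
        n, d = X.shape
        k = len(w)
        obs = ~np.isnan(X)                  # observed-entry mask
        X0 = np.where(obs, X, 0.0)          # NaNs zeroed for safe arithmetic
        # E-step: log-likelihood under each component, summing only over
        # the observed coordinates of each point.
        log_r = np.log(w)[None, :] + np.zeros((n, k))
        for j in range(k):
            ll = -0.5 * (np.log(2 * np.pi * var[j]) + (X0 - mu[j]) ** 2 / var[j])
            log_r[:, j] += np.where(obs, ll, 0.0).sum(axis=1)
        r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: impute missing coordinates with the component mean
        # (diagonal covariance => the conditional mean is just mu[j]).
        w_new = r.mean(axis=0)
        mu_new = np.empty_like(mu); var_new = np.empty_like(var)
        for j in range(k):
            xhat = np.where(obs, X0, mu[j])            # E[x | z = j]
            nj = r[:, j].sum()
            mu_new[j] = (r[:, [j]] * xhat).sum(axis=0) / nj
            # Second moment adds the conditional variance on missing entries.
            sq = (xhat - mu_new[j]) ** 2 + np.where(obs, 0.0, var[j])
            var_new[j] = (r[:, [j]] * sq).sum(axis=0) / nj
        return w_new, mu_new, var_new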
arXiv Detail & Related papers (2022-01-28T10:01:37Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Disentangling the Gauss-Newton Method and Approximate Inference for
Neural Networks [96.87076679064499]
We disentangle the generalized Gauss-Newton and approximate inference for Bayesian deep learning.
We find that the Gauss-Newton method simplifies the underlying probabilistic model significantly.
The connection to Gaussian processes enables new function-space inference algorithms.
arXiv Detail & Related papers (2020-07-21T17:42:58Z) - Overcomplete order-3 tensor decomposition, blind deconvolution and
Gaussian mixture models [1.7970523486905976]
We propose a new algorithm for tensor decomposition, based on Jennrich's algorithm, and apply our new algorithmic ideas to blind deconvolution and Gaussian mixture models.
Our first contribution is a simple and efficient algorithm to decompose certain symmetric overcomplete order-3 tensors, that is, three-dimensional arrays of the form $T = \sum_{i=1}^{n} a_i \otimes a_i \otimes a_i$ where the $a_i$s are not linearly independent.
Our second contribution builds on top of our tensor decomposition algorithm to expand the family of ...
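
The Jennrich-style starting point the paper builds on can be sketched: contract the tensor along its third mode with two random vectors, then read the components off an eigendecomposition. This classical version needs linearly independent $a_i$; relaxing that to the overcomplete case is precisely the paper's contribution, which this sketch does not attempt.

    import numpy as np

    def jennrich(T, rank):
        # T: (d, d, d) symmetric tensor, T = sum_i a_i (x) a_i (x) a_i.
        d = T.shape[0]
        rng = np.random.default_rng(0)
        g, h = rng.standard_normal(d), rng.standard_normal(d)
        # Random contractions along the third mode:
        #   T_g = sum_i (g . a_i) a_i a_i^T, and likewise T_h.
        Tg = np.tensordot(T, g, axes=([2], [0]))
        Th = np.tensordot(T, h, axes=([2], [0]))
        # Eigenvectors of T_g T_h^+ are the directions a_i / ||a_i||,
        # with eigenvalues (g . a_i) / (h . a_i).
        vals, vecs = np.linalg.eig(Tg @ np.linalg.pinv(Th))
        idx = np.argsort(-np.abs(vals))[:rank]
        V = np.real(vecs[:, idx])            # unit-norm directions
        # Recover scales c_i by least squares on the flattened tensor:
        # T ~ sum_i c_i v_i (x) v_i (x) v_i, then a_i = c_i^(1/3) v_i.
        B = np.stack([np.einsum('i,j,k->ijk', V[:, i], V[:, i], V[:, i]).ravel()
                      for i in range(rank)], axis=1)
        c, *_ = np.linalg.lstsq(B, T.ravel(), rcond=None)
        return V * np.cbrt(c)[None, :]       # columns ~ a_i, up to ordering

On a tensor built from random linearly independent $a_i$, the returned columns match the $a_i$ up to ordering; the cube root also resolves the sign, since $(-a) \otimes (-a) \otimes (-a) = -a \otimes a \otimes a$.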
arXiv Detail & Related papers (2020-07-16T06:23:37Z) - Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
Because of this guaranteed expressivity, they can capture multimodal target distributions without compromising the efficiency of sample generation.
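
The core Gaussianization idea is simple to illustrate in one dimension: push samples through their estimated CDF, then through the inverse standard-normal CDF, which maps any continuous distribution to N(0, 1). The sketch below uses an empirical CDF; the paper instead makes the transformation trainable and alternates it with rotations in higher dimensions.

    import numpy as np
    from scipy.stats import norm

    def gaussianize_1d(x):
        # Empirical CDF via ranks (offset to avoid hitting 0 and 1),
        # then the Gaussian quantile function.
        ranks = np.argsort(np.argsort(x))
        u = (ranks + 0.5) / len(x)
        return norm.ppf(u)

    x = np.random.default_rng(0).exponential(size=10_000)  # skewed input
    z = gaussianize_1d(x)
    print(z.mean(), z.std())   # approximately 0 and 1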
arXiv Detail & Related papers (2020-03-04T08:15:06Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
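
The per-node learner behind such results is a multiplicative-weights update over candidate neighbours. The sketch below is a generic Hedge-style (exponentiated-gradient) learner for sparse linear prediction, in the spirit of the Klivans-Meka approach but not their exact Sparsitron algorithm or its guarantees.

    import numpy as np

    def multiplicative_weights_regression(X, y, eta=0.1):
        # X in [-1, 1]^{n x p}, y in [-1, 1]; maintains a distribution
        # over features whose weighted vote tracks y.
        n, p = X.shape
        w = np.ones(p) / p
        for t in range(n):
            pred = w @ X[t]
            # Per-feature loss: how much feature i pushed the prediction
            # away from y[t]; values lie in [-1, 1].
            loss = 0.5 * (pred - y[t]) * X[t]
            w *= np.exp(-eta * loss)         # multiplicative update
            w /= w.sum()
        return w

On data where y depends on only a few coordinates of X, the returned distribution concentrates its mass on those coordinates, which is how a node's neighbourhood would be read off.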
arXiv Detail & Related papers (2020-02-20T10:50:58Z) - A Bayesian Filtering Algorithm for Gaussian Mixture Models [0.0]
A class of state-space systems can be modelled via Gaussian mixtures.
The exact solution to this filtering problem involves an exponential growth in the number of mixture terms.
A square-root implementation of the unified algorithm is presented.
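
The exponential growth is easy to see in code: if the process noise is itself a mixture with K components, every prediction step multiplies the number of posterior components by K, so practical filters prune or merge. Below is a sketch of one scalar Gaussian-sum step in plain covariance form; the paper's square-root implementation is a numerically stabler variant of the same idea.

    import numpy as np

    def gaussian_sum_step(w, m, P, noise_w, noise_var, a, c, y, r,
                          max_components=8):
        # One step for x' = a*x + v, y = c*x' + e, with mixture process
        # noise v ~ sum_k noise_w[k] N(0, noise_var[k]) and e ~ N(0, r).
        # Prior: sum_i w[i] N(m[i], P[i]).
        w, m, P, noise_w, noise_var = map(np.asarray, (w, m, P, noise_w, noise_var))
        M, K = len(m), len(noise_w)
        # Prediction: every (prior, noise) pair spawns a component.
        w = np.outer(w, noise_w).ravel()
        m = np.repeat(a * m, K)
        P = np.repeat(a * a * P, K) + np.tile(noise_var, M)
        # Update: per-component Kalman step; weights rescaled by each
        # component's marginal likelihood of the measurement y.
        innov = y - c * m
        S = c * c * P + r                   # innovation variances
        K_gain = c * P / S                  # Kalman gains
        w = w * np.exp(-0.5 * innov ** 2 / S) / np.sqrt(2 * np.pi * S)
        m = m + K_gain * innov
        P = (1.0 - K_gain * c) * P
        # Pruning: keep the heaviest components to stop the M*K blow-up.
        keep = np.argsort(-w)[:max_components]
        w, m, P = w[keep], m[keep], P[keep]
        return w / w.sum(), m, P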
arXiv Detail & Related papers (2017-05-16T01:05:16Z)