An Efficient 1 Iteration Learning Algorithm for Gaussian Mixture Model
And Gaussian Mixture Embedding For Neural Network
- URL: http://arxiv.org/abs/2308.09444v2
- Date: Wed, 6 Sep 2023 07:53:46 GMT
- Title: An Efficient 1 Iteration Learning Algorithm for Gaussian Mixture Model
And Gaussian Mixture Embedding For Neural Network
- Authors: Weiguo Lu, Xuan Wu, Deng Ding, Gangnan Yuan
- Abstract summary: The new algorithm brings more robustness and simplicity than the classic Expectation Maximization (EM) algorithm.
It also improves the accuracy and takes only 1 iteration for learning.
- Score: 2.261786383673667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a Gaussian Mixture Model (GMM) learning algorithm based on our
previous work on the GMM expansion idea. The new algorithm brings more robustness
and simplicity than the classic Expectation Maximization (EM) algorithm. It also
improves the accuracy and takes only 1 iteration for learning. We theoretically
prove that this new algorithm is guaranteed to converge regardless of the
parameter initialisation. Comparing our GMM expansion method with classic
probability layers in neural networks demonstrates a better capability to
overcome data uncertainty and inverse problems. Finally, we test a GMM-based
generator, which shows the potential to build further applications that
utilize distributional random sampling for stochastic variation as well as
variation control.
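The abstract does not spell out the update rule, so the following is only a minimal sketch of one plausible reading of the GMM expansion idea: component means are fixed on an even grid over the data range, a shared variance is chosen heuristically, and only the mixture weights are estimated in a single pass over the data. The function names, grid placement, and bandwidth choice here are illustrative assumptions, not the authors' implementation.

    # One-pass GMM fit with fixed components (illustrative sketch, not the paper's exact algorithm).
    import numpy as np

    def fit_gmm_one_pass(x, n_components=32):
        x = np.asarray(x, dtype=float)
        lo, hi = x.min(), x.max()
        means = np.linspace(lo, hi, n_components)   # fixed means on an even grid
        sigma = (hi - lo) / n_components            # fixed shared std (heuristic bandwidth)
        # responsibilities of each fixed component for each sample
        z = (x[:, None] - means[None, :]) / sigma
        resp = np.exp(-0.5 * z**2)
        resp /= resp.sum(axis=1, keepdims=True)
        weights = resp.mean(axis=0)                 # single weight update, no EM loop
        return weights, means, sigma

    def gmm_density(t, weights, means, sigma):
        z = (t[:, None] - means[None, :]) / sigma
        comp = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))
        return comp @ weights

    # usage: fit a bimodal sample in one pass and evaluate the learned density
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(3, 1.0, 5000)])
    w, mu, s = fit_gmm_one_pass(x)
    p = gmm_density(np.linspace(-5, 7, 200), w, mu, s)

Because the means and variances never move, the weight estimate is a fixed computation rather than an iterative optimisation, which is consistent with the abstract's claim of convergence regardless of parameter initialisation.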
Related papers
- Cramer Type Distances for Learning Gaussian Mixture Models by Gradient
Descent [0.0]
As of today, few known algorithms can fit or learn Gaussian mixture models by gradient descent.
We propose a distance function called the Sliced Cramér 2-distance for learning general multivariate GMMs.
These features are especially useful for distributional reinforcement learning and Deep Q Networks.
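The summary does not define the distance; for reference, the classical one-dimensional Cramér 2-distance between distributions with CDFs $F$ and $G$ is
$$ d_2(F, G) = \left( \int_{-\infty}^{\infty} \big(F(t) - G(t)\big)^2 \, dt \right)^{1/2}, $$
and the sliced variant presumably averages $d_2^2$ over random unit directions $\theta$, comparing the CDFs of the one-dimensional projections $\theta^\top X$ and $\theta^\top Y$; the exact normalisation used in the paper may differ.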
arXiv Detail & Related papers (2023-07-13T13:43:02Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
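For context, the Cauchy-Schwarz divergence between densities $p$ and $q$ is
$$ D_{\mathrm{CS}}(p, q) = -\log \frac{\int p(x)\, q(x)\, dx}{\sqrt{\int p(x)^2\, dx \, \int q(x)^2\, dx}}, $$
and for GMMs every integral reduces to a sum of Gaussian cross terms via $\int \mathcal{N}(x; \mu_i, \Sigma_i)\, \mathcal{N}(x; \mu_j, \Sigma_j)\, dx = \mathcal{N}(\mu_i; \mu_j, \Sigma_i + \Sigma_j)$, which is presumably the analytic computability the summary refers to.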
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
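The summary does not give the denoiser; a standard MMSE denoiser under a scalar Gaussian-mixture prior, which is presumably the kind of module L-GM-AMP plugs in (the learned version in the paper may parameterise it differently), takes $y = x + n$ with $n \sim \mathcal{N}(0, \sigma^2)$ and $x \sim \sum_k w_k\, \mathcal{N}(\mu_k, \sigma_k^2)$ to
$$ \hat{x}(y) = \sum_k \pi_k(y)\, \frac{\sigma_k^2\, y + \sigma^2 \mu_k}{\sigma_k^2 + \sigma^2}, \qquad \pi_k(y) = \frac{w_k\, \mathcal{N}(y; \mu_k, \sigma_k^2 + \sigma^2)}{\sum_j w_j\, \mathcal{N}(y; \mu_j, \sigma_j^2 + \sigma^2)}. $$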
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- DeepGMR: Learning Latent Gaussian Mixture Models for Registration [113.74060941036664]
Point cloud registration is a fundamental problem in 3D computer vision, graphics and robotics.
In this paper, we introduce Deep Gaussian Mixture Registration (DeepGMR), the first learning-based registration method that explicitly leverages a probabilistic registration paradigm.
Our proposed method shows favorable performance when compared with state-of-the-art geometry-based and learning-based registration methods.
arXiv Detail & Related papers (2020-08-20T17:25:16Z)
- Disentangling the Gauss-Newton Method and Approximate Inference for
Neural Networks [96.87076679064499]
We disentangle the generalized Gauss-Newton and approximate inference for Bayesian deep learning.
We find that the Gauss-Newton method simplifies the underlying probabilistic model significantly.
The connection to Gaussian processes enables new function-space inference algorithms.
arXiv Detail & Related papers (2020-07-21T17:42:58Z)
- GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models [29.42264360774606]
Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game.
We propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs.
We show that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of two Gaussians.
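For reference, the zero-sum game mentioned here is, in its generic form, the standard GAN minimax objective
$$ \min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]; $$
GAT-GMM restricts both players to GMM-tailored parametric classes, and its specific objective need not be this log-loss form.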
arXiv Detail & Related papers (2020-06-18T06:11:28Z)
- Gaussian Process Boosting [13.162429430481982]
We introduce a novel way to combine boosting with Gaussian process and mixed effects models.
We obtain increased prediction accuracy compared to existing approaches on simulated and real-world data sets.
arXiv Detail & Related papers (2020-04-06T13:19:54Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime of $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)