Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models
- URL: http://arxiv.org/abs/2407.00490v1
- Date: Sat, 29 Jun 2024 16:44:29 GMT
- Title: Toward Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixture Models
- Authors: Weihang Xu, Maryam Fazel, Simon S. Du
- Abstract summary: We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM)
This is the first global convergence result for Gaussian mixtures with more than $2$ components.
- Score: 47.294535652946095
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study the gradient Expectation-Maximization (EM) algorithm for Gaussian Mixture Models (GMM) in the over-parameterized setting, where a general GMM with $n>1$ components learns from data that are generated by a single ground truth Gaussian distribution. While results for the special case of 2-Gaussian mixtures are well-known, a general global convergence analysis for arbitrary $n$ remains unresolved and faces several new technical barriers since the convergence becomes sub-linear and non-monotonic. To address these challenges, we construct a novel likelihood-based convergence analysis framework and rigorously prove that gradient EM converges globally with a sublinear rate $O(1/\sqrt{t})$. This is the first global convergence result for Gaussian mixtures with more than $2$ components. The sublinear convergence rate is due to the algorithmic nature of learning over-parameterized GMM with gradient EM. We also identify a new emerging technical challenge for learning general over-parameterized GMM: the existence of bad local regions that can trap gradient EM for an exponential number of steps.
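For concreteness, here is a minimal sketch of one gradient EM step in the setting the paper studies (a mixture of $n$ unit-variance Gaussians with uniform weights $1/n$, fit to data from a single ground-truth Gaussian). The step size, sample size, and component count below are illustrative choices, not taken from the paper.

```python
import numpy as np

def gradient_em_step(mu, X, eta=0.5):
    """One gradient EM step for an n-component GMM with identity covariances
    and uniform mixing weights 1/n (illustrative sketch, not the authors' code).

    mu : (n, d) current component means
    X  : (m, d) samples
    """
    # E-step quantities: responsibilities w[i, j] proportional to exp(-||x_i - mu_j||^2 / 2)
    sq_dist = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (m, n)
    logits = -0.5 * sq_dist
    logits -= logits.max(axis=1, keepdims=True)                 # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)
    # Gradient of the sample log-likelihood with respect to each mean
    grad = (w[:, :, None] * (X[:, None, :] - mu[None, :, :])).mean(axis=0)  # (n, d)
    return mu + eta * grad

# Over-parameterized setting: data from a single standard Gaussian, n > 1 components
rng = np.random.default_rng(0)
d, n, m = 2, 4, 5000
X = rng.standard_normal((m, d))       # ground truth: one Gaussian centered at the origin
mu = rng.standard_normal((n, d))      # random initialization of the n means
for t in range(200):
    mu = gradient_em_step(mu, X)
print(np.linalg.norm(mu, axis=1))     # means drift toward the true mean 0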
Related papers
- Unveiling the Cycloid Trajectory of EM Iterations in Mixed Linear Regression [5.883916678819683]
We study the trajectory of iterations and the convergence rates of the Expectation-Maximization (EM) algorithm for two-component Mixed Linear Regression (2MLR)
Recent results have established the super-linear convergence of EM for 2MLR in the noiseless and high SNR settings.
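For concreteness, a minimal sketch of the textbook EM update for symmetric two-component MLR (components $\pm\beta$, equal weights, Gaussian noise) is given below; the noise level, dimensions, and data are illustrative, not taken from the paper.

```python
import numpy as np

def em_step_2mlr(beta, X, y, sigma=1.0):
    """One EM step for symmetric 2-component mixed linear regression
    y ~ ±<beta, x> + N(0, sigma^2) with equal weights (textbook form, illustrative)."""
    # E-step: posterior weight of the "+beta" component, expressed via tanh
    w = np.tanh(y * (X @ beta) / sigma**2)
    # M-step: weighted least squares in closed form
    return np.linalg.solve(X.T @ X, X.T @ (w * y))

rng = np.random.default_rng(1)
d, m = 5, 2000
beta_star = rng.standard_normal(d)
X = rng.standard_normal((m, d))
signs = rng.choice([-1.0, 1.0], size=m)
y = signs * (X @ beta_star) + 0.1 * rng.standard_normal(m)
beta = rng.standard_normal(d)
for _ in range(50):
    beta = em_step_2mlr(beta, X, y, sigma=0.1)
print(min(np.linalg.norm(beta - beta_star), np.linalg.norm(beta + beta_star)))
```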
arXiv Detail & Related papers (2024-05-28T14:46:20Z) - Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction [49.66486092259376]
The mean-field Langevin dynamics (MFLD) is a nonlinear generalization of the Langevin dynamics that incorporates a distribution-dependent drift.
Recent works have shown that MFLD globally minimizes an entropy-regularized convex functional in the space of measures.
We provide a framework to prove a uniform-in-time propagation of chaos for MFLD that takes into account the errors due to finite-particle approximation, time-discretization, and gradient approximation.
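As a rough illustration of the finite-particle, time-discretized scheme such results concern, here is a toy sketch of noisy gradient updates whose drift depends on the empirical particle distribution; the confining potential, quadratic interaction, and step size are arbitrary illustrative choices, not the paper's setup.

```python
import numpy as np

# Toy finite-particle discretization of mean-field Langevin dynamics for
# F(mu) = ∫ V dmu + (1/2) ∫∫ W(x - y) dmu(x) dmu(y) with entropic regularization.
def V_grad(x):        # confining potential V(x) = ||x||^2 / 2 (illustrative)
    return x

def W_grad(diff):     # interaction W(x - y) = ||x - y||^2 / 2 (illustrative)
    return diff

rng = np.random.default_rng(2)
N, d = 256, 2          # number of particles, dimension
eta, lam = 0.05, 0.1   # step size and entropic regularization strength
X = rng.standard_normal((N, d))

for t in range(500):
    # Distribution-dependent drift: gradient of the first variation at each particle
    interaction = W_grad(X[:, None, :] - X[None, :, :]).mean(axis=1)   # (N, d)
    drift = V_grad(X) + interaction
    # Euler-Maruyama step with Gaussian noise (the Langevin part)
    X = X - eta * drift + np.sqrt(2 * eta * lam) * rng.standard_normal((N, d))

print(X.mean(axis=0), X.var(axis=0))   # particles settle near an entropy-regularized minimizer
```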
arXiv Detail & Related papers (2023-06-12T16:28:11Z) - The Parametric Stability of Well-separated Spherical Gaussian Mixtures [7.238973585403367]
We quantify the parameter stability of a spherical Gaussian Mixture Model (sGMM) under small perturbations in distribution space.
We derive the first explicit bound showing that, for a mixture of spherical Gaussians $P$ (sGMM) in a pre-defined model class, every other sGMM in this model class that is close to $P$ in total variation distance also has a small parameter distance to $P$.
arXiv Detail & Related papers (2023-02-01T04:52:13Z) - Learning Gaussian Mixtures with Generalised Linear Models: Precise Asymptotics in High-dimensions [79.35722941720734]
Generalised linear models for multi-class classification problems are one of the fundamental building blocks of modern machine learning tasks.
We prove exact asymptotics characterising the estimator obtained via empirical risk minimisation in high dimensions.
We discuss how our theory can be applied beyond the scope of synthetic data.
arXiv Detail & Related papers (2021-06-07T16:53:56Z) - Leveraging Non-uniformity in First-order Non-convex Optimization [93.6817946818977]
Non-uniform refinement of objective functions leads to Non-uniform Smoothness (NS) and the Non-uniform Łojasiewicz inequality (NL).
New definitions inspire new geometry-aware first-order methods that converge to global optimality faster than the classical $\Omega(1/t^2)$ lower bounds.
arXiv Detail & Related papers (2021-05-13T04:23:07Z) - Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
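A generic sketch of spectral embedding with the random walk Laplacian follows (not the paper's code; the two-block graph and the choice of K are illustrative).

```python
import numpy as np

def random_walk_spectral_embedding(A, K):
    """Embed nodes using the top-K eigenvectors of D^{-1} A, i.e. the random
    walk Laplacian L_rw = I - D^{-1} A (generic sketch, not the paper's code)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    # The symmetrically normalized adjacency shares its eigenvalues with D^{-1} A
    A_sym = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(A_sym)
    top = np.argsort(vals)[::-1][:K]
    # Convert eigenvectors of A_sym back to eigenvectors of D^{-1} A
    return d_inv_sqrt[:, None] * vecs[:, top]

# Tiny two-community example (illustrative): two dense blocks, sparse across
rng = np.random.default_rng(3)
n, K = 60, 2
blocks = np.repeat([0, 1], n // 2)
p = np.where(blocks[:, None] == blocks[None, :], 0.3, 0.02)
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1); A = A + A.T
emb = random_walk_spectral_embedding(A, K)
print(emb.shape)   # rows concentrate near K distinct points, one per community
```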
arXiv Detail & Related papers (2021-05-03T16:36:27Z) - GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models [29.42264360774606]
Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game.
We propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs.
We show that GAT-GMM can perform as well as the expectation-maximization algorithm in learning mixtures of two Gaussians.
arXiv Detail & Related papers (2020-06-18T06:11:28Z) - Dual Stochastic Natural Gradient Descent and convergence of interior half-space gradient approximations [0.0]
Multinomial logistic regression (MLR) is widely used in statistics and machine learning.
Stochastic gradient descent (SGD) is the most common approach for determining the parameters of an MLR model in big data scenarios.
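For reference, a minimal sketch of plain SGD on the multinomial logistic (softmax) loss is shown below; it is the generic baseline, not the dual natural-gradient method proposed in the paper, and the learning rate and data are illustrative.

```python
import numpy as np

def sgd_multinomial_logreg(X, y, num_classes, lr=0.1, epochs=5, seed=0):
    """Plain SGD for multinomial logistic regression (generic sketch)."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    W = np.zeros((d, num_classes))
    for _ in range(epochs):
        for i in rng.permutation(m):
            logits = X[i] @ W
            logits -= logits.max()                 # numerical stability
            p = np.exp(logits); p /= p.sum()       # softmax probabilities
            p[y[i]] -= 1.0                         # gradient of cross-entropy w.r.t. logits
            W -= lr * np.outer(X[i], p)            # single-sample gradient step
    return W

# Illustrative data: three well-separated clusters
rng = np.random.default_rng(4)
centers = np.array([[0, 3], [3, 0], [-3, -3]], dtype=float)
X = np.vstack([c + 0.5 * rng.standard_normal((100, 2)) for c in centers])
y = np.repeat([0, 1, 2], 100)
W = sgd_multinomial_logreg(X, y, num_classes=3)
print((np.argmax(X @ W, axis=1) == y).mean())      # training accuracy
```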
arXiv Detail & Related papers (2020-01-19T00:53:49Z) - Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for nonconcave min-max problems.
Our experiments show that the advantage of adaptive over non-adaptive gradient algorithms in GAN training can be observed empirically.
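To illustrate the optimistic idea, here is a generic sketch of an optimistic gradient descent-ascent step on a toy bilinear problem; it is not necessarily the exact adaptive variant analyzed in the paper, and the step size is an arbitrary choice.

```python
import numpy as np

def optimistic_gda(grad_x, grad_y, x, y, eta=0.05, steps=500):
    """Generic optimistic gradient descent-ascent for min_x max_y f(x, y):
    each update extrapolates using the previous gradient (illustrative sketch,
    not necessarily the exact variant analyzed in the paper)."""
    gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - eta * (2 * gx - gx_prev)   # descent on x with optimistic correction
        y = y + eta * (2 * gy - gy_prev)   # ascent on y with optimistic correction
        gx_prev, gy_prev = gx, gy
    return x, y

# Bilinear toy problem f(x, y) = x * y, whose saddle point is (0, 0);
# plain simultaneous GDA cycles or diverges here, while the optimistic step converges.
x, y = optimistic_gda(lambda x, y: y, lambda x, y: x, 1.0, 1.0)
print(x, y)
```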
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.