Uniform Convergence Rates for Maximum Likelihood Estimation under
Two-Component Gaussian Mixture Models
- URL: http://arxiv.org/abs/2006.00704v1
- Date: Mon, 1 Jun 2020 04:13:48 GMT
- Title: Uniform Convergence Rates for Maximum Likelihood Estimation under
Two-Component Gaussian Mixture Models
- Authors: Tudor Manole, Nhat Ho
- Abstract summary: We derive uniform convergence rates for the maximum likelihood estimator and minimax lower bounds for parameter estimation.
We assume the mixing proportions of the mixture are known and fixed, but make no separation assumption on the underlying mixture components.
- Score: 13.769786711365104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We derive uniform convergence rates for the maximum likelihood estimator and
minimax lower bounds for parameter estimation in two-component location-scale
Gaussian mixture models with unequal variances. We assume the mixing
proportions of the mixture are known and fixed, but make no separation
assumption on the underlying mixture components. A phase transition is shown to
exist in the optimal parameter estimation rate, depending on whether or not the
mixture is balanced. Key to our analysis is a careful study of the dependence
between the parameters of location-scale Gaussian mixture models, as captured
through systems of polynomial equalities and inequalities whose solution set
drives the rates we obtain. A simulation study illustrates the theoretical
findings of this work.
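The simulation study itself is not reproduced here, but a minimal sketch of the kind of experiment the abstract describes is below: draw samples from a balanced two-component location-scale mixture with known proportion 1/2, maximize the mixture log-likelihood numerically, and watch the parameter error shrink with n. The true parameter values, starting point, and optimizer are hypothetical choices, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of the simulation described above:
# MLE for a two-component location-scale Gaussian mixture with a known,
# fixed mixing proportion. True parameters and starting point are made up.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
PI = 0.5                                  # known proportion (balanced case)
MU1, S1, MU2, S2 = -1.0, 1.0, 1.0, 1.5    # hypothetical true parameters

def neg_log_lik(theta, x):
    mu1, log_s1, mu2, log_s2 = theta      # log-scales keep sigmas positive
    dens = (PI * norm.pdf(x, mu1, np.exp(log_s1))
            + (1 - PI) * norm.pdf(x, mu2, np.exp(log_s2)))
    return -np.sum(np.log(dens + 1e-300))

for n in (100, 1000, 10000):
    z = rng.random(n) < PI
    x = np.where(z, rng.normal(MU1, S1, n), rng.normal(MU2, S2, n))
    # The unconstrained mixture likelihood is unbounded, so a local search
    # from a sensible start stands in for the constrained MLE of the paper.
    res = minimize(neg_log_lik, x0=np.array([-0.5, 0.0, 0.5, 0.0]),
                   args=(x,), method="Nelder-Mead")
    m1, s1, m2, s2 = res.x[0], np.exp(res.x[1]), res.x[2], np.exp(res.x[3])
    err = min(abs(m1 - MU1) + abs(s1 - S1) + abs(m2 - MU2) + abs(s2 - S2),
              abs(m1 - MU2) + abs(s1 - S2) + abs(m2 - MU1) + abs(s2 - S1))
    print(f"n={n:6d}  parameter estimation error = {err:.3f}")
```

The error is computed up to relabeling of the components, since in the balanced case the two labelings are interchangeable.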
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
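The specific modifications are not detailed in this summary; as a baseline for comparison, a plain one-nearest-neighbor matching estimator of an average treatment effect can be sketched as follows (data, dimensions, and the effect size are all hypothetical).

```python
# Minimal sketch of a plain 1-NN matching estimator of the average
# treatment effect (a baseline only; the paper's root-n-consistent
# modifications are not reproduced here). All data are synthetic.
import numpy as np
from scipy.spatial import cKDTree

def matching_ate(X, y, w):
    """X: (n, d) covariates, y: (n,) outcomes, w: (n,) 0/1 treatment."""
    treated, control = w == 1, w == 0
    # Impute each unit's counterfactual outcome from its nearest
    # neighbor in the opposite treatment group.
    t_tree, c_tree = cKDTree(X[treated]), cKDTree(X[control])
    cf = np.empty_like(y, dtype=float)
    cf[treated] = y[control][c_tree.query(X[treated])[1]]
    cf[control] = y[treated][t_tree.query(X[control])[1]]
    return np.mean(np.where(w == 1, y - cf, cf - y))

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
w = (rng.random(n) < 0.5).astype(int)
y = X @ np.array([1.0, -0.5, 0.2]) + 2.0 * w + rng.normal(size=n)
print(matching_ate(X, y, w))   # should land near the true effect of 2.0
```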
arXiv Detail & Related papers (2024-07-11T13:28:34Z) - On the best approximation by finite Gaussian mixtures [7.084611118322622]
We consider the problem of approximating a general Gaussian location mixture by finite mixtures.
The minimum order of finite mixtures that achieve a prescribed accuracy is determined within constant factors.
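In symbols, writing $\pi$ for the mixing distribution of the general location mixture and $\varphi$ for the standard Gaussian density, the quantity being determined can be phrased as the minimum order $m^*(\epsilon)$ below; the distance $d(\cdot,\cdot)$ is left generic, since this summary does not specify the metric used in the paper.

```latex
% Illustrative formulation (the paper's exact metric may differ):
% minimum order of a finite Gaussian mixture achieving accuracy epsilon.
m^*(\epsilon) = \min\Big\{ m \in \mathbb{N} :
    \inf_{\substack{w_1,\dots,w_m \ge 0,\ \sum_j w_j = 1 \\
                    \theta_1,\dots,\theta_m \in \mathbb{R}}}
    d\Big( \int \varphi(x-\theta)\,\mathrm{d}\pi(\theta),\;
           \sum_{j=1}^{m} w_j\,\varphi(x-\theta_j) \Big) \le \epsilon \Big\}
```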
arXiv Detail & Related papers (2024-04-13T06:57:44Z)
- Robust scalable initialization for Bayesian variational inference with
multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from quadratic growth in the number of variational parameters as the dimension of the model grows.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
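A standard way to build such a warm start is a Laplace-style approximation: locate a posterior mode and use the local curvature for the covariance. The sketch below uses a hypothetical log-posterior and lets BFGS's accumulated inverse-Hessian stand in for exact curvature; it illustrates the general idea, not the proposed method.

```python
# Minimal sketch of a Laplace-style Gaussian warm start for variational
# inference. The log-posterior is hypothetical, and BFGS's accumulated
# inverse-Hessian approximation stands in for the exact curvature.
import numpy as np
from scipy.optimize import minimize

def neg_log_post(z):
    # Hypothetical unnormalized negative log-posterior (correlated bowl).
    A = np.array([[2.0, 0.6], [0.6, 1.0]])
    return 0.5 * z @ A @ z

res = minimize(neg_log_post, x0=np.zeros(2), method="BFGS")
mu0 = res.x            # posterior mode -> initial variational mean
Sigma0 = res.hess_inv  # inverse curvature -> initial covariance
# The quadratic growth noted above: a full covariance in d dimensions
# carries d * (d + 1) / 2 free parameters.
print(mu0, Sigma0)
```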
arXiv Detail & Related papers (2023-07-12T19:30:04Z)
- Statistical Efficiency of Score Matching: The View from Isoperimetry [96.65637602827942]
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results both in the asymptotic and the finite-sample regimes.
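For reference, the (Hyvärinen) score matching objective whose efficiency is at issue can be written, up to an additive constant independent of $\theta$, as

```latex
% Hyvarinen score matching objective (up to an additive constant in theta),
% where p is the data distribution and q_theta the model density.
J(\theta) = \mathbb{E}_{x \sim p}\!\left[
    \tfrac{1}{2}\,\bigl\lVert \nabla_x \log q_\theta(x) \bigr\rVert^2
    + \Delta_x \log q_\theta(x) \right] + \mathrm{const}
```

where the isoperimetric properties mentioned above (e.g. a log-Sobolev or Poincaré constant of $p$) govern how well minimizing $J$ recovers $\theta$.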
arXiv Detail & Related papers (2022-10-03T06:09:01Z)
- Refined Convergence Rates for Maximum Likelihood Estimation under Finite
Mixture Models [13.769786711365104]
We revisit convergence rates for maximum likelihood estimation (MLE) under finite mixture models.
We show that a subset of the components of the penalized MLE typically converge significantly faster than could have been anticipated from past work.
arXiv Detail & Related papers (2022-02-17T17:46:40Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical
Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm for mixtures of elliptical distributions is investigated, built to handle potentially missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
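In the Gaussian special case of the elliptical family, the imputation machinery such an EM algorithm relies on reduces to conditioning: given the observed coordinates, the missing block is again Gaussian. A minimal sketch of that conditioning step follows (all numbers hypothetical; this is not the paper's full algorithm).

```python
# Minimal sketch of conditional-Gaussian imputation, the building block
# of an E-step under missing data (Gaussian special case only; this is
# not the paper's full elliptical-mixture algorithm).
import numpy as np

def conditional_gaussian(mu, Sigma, x_obs, obs_idx, mis_idx):
    """Mean and covariance of x[mis_idx] given x[obs_idx] = x_obs."""
    S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
    S_mo = Sigma[np.ix_(mis_idx, obs_idx)]
    S_mm = Sigma[np.ix_(mis_idx, mis_idx)]
    K = S_mo @ np.linalg.inv(S_oo)                  # Sigma_mo Sigma_oo^{-1}
    cond_mean = mu[mis_idx] + K @ (x_obs - mu[obs_idx])
    cond_cov = S_mm - K @ S_mo.T                    # Schur complement
    return cond_mean, cond_cov

mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 2.0, 0.5],
                  [0.1, 0.5, 1.5]])
m, C = conditional_gaussian(mu, Sigma, np.array([0.2]), [0], [1, 2])
print(m, C)
```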
arXiv Detail & Related papers (2022-01-28T10:01:37Z)
- A similarity-based Bayesian mixture-of-experts model [0.5156484100374058]
We present a new non-parametric mixture-of-experts model for multivariate regression problems.
Using a conditionally specified model, predictions for out-of-sample inputs are based on similarities to each observed data point.
Posterior inference is performed on the parameters of the mixture as well as the distance metric.
arXiv Detail & Related papers (2020-12-03T18:08:30Z)
- Consistent Estimation of Identifiable Nonparametric Mixture Models from
Grouped Observations [84.81435917024983]
This work proposes an algorithm that consistently estimates any identifiable mixture model from grouped observations.
A practical implementation is provided for paired observations, and the approach is shown to outperform existing methods.
arXiv Detail & Related papers (2020-06-12T20:44:22Z)
- Minimax Optimal Estimation of KL Divergence for Continuous Distributions [56.29748742084386]
Estimating Kullback-Leibler divergence from independent and identically distributed samples is an important problem in various domains.
One simple and effective estimator is based on k-nearest-neighbor distances between these samples.
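One common form of that estimator, shown here for $k = 1$ (in the style of Wang, Kulkarni, and Verdú), is sketched below; constants and bias corrections vary across papers, so read this as the general construction rather than the exact minimax-optimal estimator analyzed in the paper.

```python
# Minimal sketch of the classic 1-NN Kullback-Leibler divergence
# estimator (Wang-Kulkarni-Verdu style); constants and bias corrections
# differ across papers, including possibly this one.
import numpy as np
from scipy.spatial import cKDTree

def knn_kl(x, y):
    """Estimate D(P || Q) from x ~ P of shape (n, d) and y ~ Q of (m, d)."""
    n, d = x.shape
    m = y.shape[0]
    rho = cKDTree(x).query(x, k=2)[0][:, 1]   # distance to nearest *other* x
    nu = cKDTree(y).query(x, k=1)[0]          # distance to nearest y
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(5000, 1))
y = rng.normal(1.0, 1.0, size=(5000, 1))
print(knn_kl(x, y))   # true KL between N(0,1) and N(1,1) is 0.5
```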
arXiv Detail & Related papers (2020-02-26T16:37:37Z)
- Algebraic and Analytic Approaches for Parameter Learning in Mixture
Models [66.96778152993858]
We present two different approaches for parameter learning in several mixture models in one dimension.
For some of these distributions, our results represent the first guarantees for parameter estimation.
arXiv Detail & Related papers (2020-01-19T05:10:56Z)