Dendrograms of Mixing Measures for Softmax-Gated Gaussian Mixture of Experts: Consistency without Model Sweeps
- URL: http://arxiv.org/abs/2510.12744v1
- Date: Tue, 14 Oct 2025 17:23:44 GMT
- Title: Dendrograms of Mixing Measures for Softmax-Gated Gaussian Mixture of Experts: Consistency without Model Sweeps
- Authors: Do Tien Hai, Trung Nguyen Mai, TrungTin Nguyen, Nhat Ho, Binh T. Nguyen, Christopher Drovandi
- Abstract summary: The framework addresses non-identifiability of gating parameters up to common translations, intrinsic gate-expert interactions, and the tight numerator-denominator coupling of the softmax-induced conditional density. For model selection, we adapt dendrograms of mixing measures to SGMoE, yielding a consistent, sweep-free selector of the number of experts that attains optimal parameter rates. On a maize proteomics dataset of drought-responsive traits, our dendrogram-guided SGMoE selects two experts, exposes a clear mixing-measure hierarchy, stabilizes the likelihood early, and yields interpretable genotype-phenotype maps.
- Score: 41.371172458797524
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a unified statistical framework for softmax-gated Gaussian mixture of experts (SGMoE) that addresses three long-standing obstacles in parameter estimation and model selection: (i) non-identifiability of gating parameters up to common translations, (ii) intrinsic gate-expert interactions that induce coupled differential relations in the likelihood, and (iii) the tight numerator-denominator coupling in the softmax-induced conditional density. Our approach introduces Voronoi-type loss functions aligned with the gate-partition geometry and establishes finite-sample convergence rates for the maximum likelihood estimator (MLE). In over-specified models, we reveal a link between the MLE's convergence rate and the solvability of an associated system of polynomial equations characterizing near-nonidentifiable directions. For model selection, we adapt dendrograms of mixing measures to SGMoE, yielding a consistent, sweep-free selector of the number of experts that attains pointwise-optimal parameter rates under overfitting while avoiding multi-size training. Simulations on synthetic data corroborate the theory, accurately recovering the expert count and achieving the predicted rates for parameter estimation while closely approximating the regression function. Under model misspecification (e.g., $\epsilon$-contamination), the dendrogram selection criterion is robust, recovering the true number of mixture components, while the Akaike information criterion, the Bayesian information criterion, and the integrated completed likelihood tend to overselect as sample size grows. On a maize proteomics dataset of drought-responsive traits, our dendrogram-guided SGMoE selects two experts, exposes a clear mixing-measure hierarchy, stabilizes the likelihood early, and yields interpretable genotype-phenotype maps, outperforming standard criteria without multi-size training.
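To make the model and the selector concrete, the sketch below (Python with NumPy/SciPy) shows a softmax-gated Gaussian conditional density and a dendrogram-style hierarchical merge of fitted expert atoms, where a large jump in merge heights suggests the number of genuine experts. Function names and the simple midpoint merge rule are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a softmax-gated Gaussian mixture of experts (SGMoE)
# density and a dendrogram-style merge of its mixing measure.  Names and
# the merge rule are illustrative assumptions, not the paper's method.
import numpy as np
from scipy.special import softmax
from scipy.stats import norm


def sgmoe_conditional_density(y, x, gate_w, gate_b, expert_a, expert_b, expert_var):
    """p(y | x) = sum_k softmax_k(gate_w_k * x + gate_b_k) * N(y; expert_a_k * x + expert_b_k, expert_var_k).

    Scalar input x and response y; each parameter array has length K (number of experts).
    """
    gates = softmax(gate_w * x + gate_b)             # softmax gating weights
    means = expert_a * x + expert_b                  # linear expert means
    return float(np.sum(gates * norm.pdf(y, loc=means, scale=np.sqrt(expert_var))))


def dendrogram_of_atoms(atoms):
    """Hierarchically merge the two closest expert 'atoms' (rows of a K x d
    parameter matrix), recording each merge height (the distance at which
    the merge occurs).  A large jump in heights suggests the true number
    of experts."""
    atoms = np.asarray(atoms, dtype=float)
    heights = []
    while len(atoms) > 1:
        # Pairwise distances between atoms in parameter space.
        dists = np.linalg.norm(atoms[:, None, :] - atoms[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        i, j = np.unravel_index(np.argmin(dists), dists.shape)
        heights.append(dists[i, j])
        # Crude merge: replace the pair by its midpoint (the paper's merge
        # accounts for gate translation-invariance and mixing proportions).
        merged = 0.5 * (atoms[i] + atoms[j])
        atoms = np.vstack([np.delete(atoms, [i, j], axis=0), merged])
    return heights


# Toy usage: three fitted atoms, two of which nearly coincide, so the first
# merge height is small and the second is large -> select two experts.
fitted_atoms = [[1.0, 0.0, 2.0], [1.05, 0.02, 2.1], [-3.0, 1.0, -1.0]]
print(dendrogram_of_atoms(fitted_atoms))  # roughly [0.11, 5.1]
```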
Related papers
- Mixture-of-experts Wishart model for covariance matrices with an application to Cancer drug screening [0.8594140167290097]
We introduce a comprehensive Bayesian framework for analyzing heterogeneous covariance data through both classical mixture models and a novel mixture-of-experts Wishart model. We develop an efficient Gibbs-within-Metropolis-Hastings sampling algorithm tailored to the geometry of the Wishart likelihood and the gating network. We present an innovative application of our methodology to a challenging dataset: cancer drug sensitivity profiles.
arXiv Detail & Related papers (2026-02-14T21:07:14Z) - Fast Model Selection and Stable Optimization for Softmax-Gated Multinomial-Logistic Mixture of Experts Models [40.216463162163976]
We develop a batch minorization-maximization algorithm for softmax-gated multinomial-logistic MoE. We also prove finite-sample rates for conditional density estimation and parameter recovery. Experiments on biological protein-protein interaction prediction validate the full pipeline.
arXiv Detail & Related papers (2026-02-08T14:45:41Z) - Rethinking Multinomial Logistic Mixture of Experts with Sigmoid Gating Function [84.47276999832135]
We show that the sigmoid gate exhibits a lower sample complexity than the softmax gate for both parameter and expert estimation. We find that incorporating a temperature into the sigmoid gate leads to a sample complexity of exponential order due to an intrinsic interaction between the temperature and gating parameters.
arXiv Detail & Related papers (2026-02-01T22:19:16Z) - Improving Minimax Estimation Rates for Contaminated Mixture of Multinomial Logistic Experts via Expert Heterogeneity [49.809923981964715]
Contaminated mixture of experts (MoE) is motivated by transfer learning methods where a pre-trained model, acting as a frozen expert, is integrated with an adapter model, functioning as a trainable expert, in order to learn a new task. In this work, we characterize uniform convergence rates for estimating parameters under challenging settings where ground-truth parameters vary with the sample size. We also establish corresponding minimax lower bounds showing that these rates are minimax optimal.
arXiv Detail & Related papers (2026-01-31T23:45:50Z) - Model Selection for Gaussian-gated Gaussian Mixture of Experts Using Dendrograms of Mixing Measures [24.865197779389323]
Mixture of Experts (MoE) models constitute a widely utilized class of ensemble learning approaches in statistics and machine learning. We introduce a novel extension to Gaussian-gated MoE models that enables consistent estimation of the true number of mixture components. Experimental results on synthetic data demonstrate the effectiveness of the proposed method in accurately recovering the number of experts.
arXiv Detail & Related papers (2025-05-19T12:41:19Z) - Quantifying predictive uncertainty of aphasia severity in stroke patients with sparse heteroscedastic Bayesian high-dimensional regression [47.1405366895538]
Sparse linear regression methods for high-dimensional data commonly assume that residuals have constant variance, which can be violated in practice.
This paper proposes estimating high-dimensional heteroscedastic linear regression models using a heteroscedastic partitioned empirical Bayes Expectation Conditional Maximization algorithm.
arXiv Detail & Related papers (2023-09-15T22:06:29Z) - A non-asymptotic model selection in block-diagonal mixture of polynomial experts models [1.491109220586182]
We introduce a penalized maximum likelihood selection criterion to estimate the unknown conditional density of a regression model.
We provide a strong theoretical guarantee, including a finite-sample oracle inequality satisfied by the penalized maximum likelihood estimator with a Jensen-Kullback-Leibler type loss.
arXiv Detail & Related papers (2021-04-18T21:32:20Z) - Jointly Modeling and Clustering Tensors in High Dimensions [6.072664839782975]
We consider the problem of jointly modeling and clustering tensors in high dimensions.
We propose an efficient high-dimensional expectation conditional maximization algorithm that converges geometrically to a neighborhood that is within statistical precision of the true parameter.
arXiv Detail & Related papers (2021-04-15T21:06:16Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z) - Estimation of Switched Markov Polynomial NARX models [75.91002178647165]
We identify a class of models for hybrid dynamical systems characterized by nonlinear autoregressive exogenous (NARX) components.
The proposed approach is demonstrated on an SMNARX problem composed of three nonlinear sub-models with specific regressors.
arXiv Detail & Related papers (2020-09-29T15:00:47Z) - Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z) - Non-asymptotic oracle inequalities for the Lasso in high-dimensional mixture of experts [2.794896499906838]
We consider the class of softmax-gated Gaussian mixture of experts (SGMoE) models, which combine softmax gating functions with Gaussian experts.
To the best of our knowledge, we are the first to investigate the $l_1$-regularization properties of SGMoE models from a non-asymptotic perspective.
We provide a lower bound on the regularization parameter of the Lasso penalty that ensures non-asymptotic theoretical control of the Kullback--Leibler loss of the Lasso estimator for SGMoE models.
arXiv Detail & Related papers (2020-09-22T15:23:35Z)
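As a rough illustration of the $l_1$-regularized estimation studied in the last related paper above, the sketch below writes down a Lasso-penalized SGMoE negative log-likelihood. The objective form, the choice of which parameters are penalized, and the regularization level `lam` are assumptions for illustration, not the paper's exact estimator or its bound on the regularization parameter.

```python
# Rough sketch of an l1-penalized (Lasso) negative log-likelihood for an
# SGMoE fit with scalar inputs/responses and K linear-Gaussian experts.
# The penalty structure and the value of lam are illustrative assumptions.
import numpy as np
from scipy.special import softmax
from scipy.stats import norm


def lasso_sgmoe_objective(params, X, Y, lam):
    """-(1/n) * sum_i log p(y_i | x_i) + lam * ||gating and mean parameters||_1."""
    gate_w, gate_b, exp_a, exp_b, exp_var = (np.asarray(p, float) for p in params)
    nll = 0.0
    for x, y in zip(X, Y):
        gates = softmax(gate_w * x + gate_b)                       # softmax gating weights
        dens = np.sum(gates * norm.pdf(y, loc=exp_a * x + exp_b,
                                       scale=np.sqrt(exp_var)))    # mixture density at (x, y)
        nll -= np.log(dens)
    l1 = sum(np.abs(p).sum() for p in (gate_w, gate_b, exp_a, exp_b))  # variances left unpenalized
    return nll / len(X) + lam * l1


# Toy usage with two experts and a hand-picked lam; the paper's result gives a
# lower bound on lam under which the Kullback-Leibler loss is controlled.
rng = np.random.default_rng(0)
X = rng.normal(size=50)
Y = 2.0 * X + rng.normal(scale=0.5, size=50)
params = ([1.0, -1.0], [0.0, 0.0], [2.0, -0.5], [0.0, 0.0], [0.25, 1.0])
print(lasso_sgmoe_objective(params, X, Y, lam=0.1))
```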
This list is automatically generated from the titles and abstracts of the papers in this site.