Modeling Expert Interactions in Sparse Mixture of Experts via Graph Structures
- URL: http://arxiv.org/abs/2510.16411v1
- Date: Sat, 18 Oct 2025 09:03:28 GMT
- Title: Modeling Expert Interactions in Sparse Mixture of Experts via Graph Structures
- Authors: Minh-Khoi Nguyen-Nhat, Rachel S. Y. Teo, Laziz Abdullaev, Maurice Mok, Viet-Hoang Tran, Tan Minh Nguyen
- Abstract summary: We introduce SymphonySMoE, a novel family of SMoE that incorporates a social graph to model interactions among experts. SymphonySMoE is lightweight, modular, and integrates seamlessly with existing SMoE-based models.
- Score: 19.516704475811522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse Mixture of Experts (SMoE) has emerged as a promising solution to achieving unparalleled scalability in deep learning by decoupling model parameter count from computational cost. By activating only a small subset of parameters per sample, SMoE enables significant growth in model capacity while maintaining efficiency. However, SMoE struggles to adapt to distributional shifts, leading to reduced robustness under data contamination. In this work, we introduce SymphonySMoE, a novel family of SMoE that incorporates a social graph to model interactions among experts. This graph-based structure enhances the token routing process, addressing the robustness challenges that are inherent in conventional SMoE designs. SymphonySMoE is lightweight, modular, and integrates seamlessly with existing SMoE-based models such as the XMoE and the Generalist Language Model. We provide both theoretical analysis and empirical evidence demonstrating SymphonySMoE's advantages over baseline SMoE. Extensive experiments on language modeling and visual instruction tuning validate our method's effectiveness. We further highlight the scalability of SymphonySMoE to models with 4.2 and 7.4 billion parameters, showcasing its applicability in fine-tuning tasks for large-scale systems.
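The abstract does not spell out how the expert graph enters the routing computation. As one plausible reading only, the sketch below treats the "social graph" as a learned adjacency matrix over experts and propagates router logits along it before the usual top-k dispatch. The `GraphRouter` class, its parameters, and the propagation rule are hypothetical names and assumptions introduced for illustration, not the authors' implementation.

```python
# Minimal sketch of graph-augmented top-k routing for an SMoE layer.
# Assumption: the expert "social graph" is a learned adjacency matrix used to
# propagate router logits among related experts before top-k selection.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphRouter(nn.Module):
    def __init__(self, d_model: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts, bias=False)
        # Learned expert-interaction graph (an assumption, not the paper's exact design).
        self.adjacency = nn.Parameter(torch.zeros(num_experts, num_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (tokens, d_model) -> per-expert logits of shape (tokens, num_experts)
        logits = self.gate(x)
        # Propagate logits over the expert graph so that scores of
        # related experts inform each other before dispatch.
        graph = torch.softmax(self.adjacency, dim=-1)
        logits = logits + logits @ graph.t()
        # Standard SMoE dispatch: keep the top-k experts per token.
        weights, indices = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        return weights, indices


# Usage: route a batch of 8 token embeddings across 16 experts.
router = GraphRouter(d_model=64, num_experts=16, top_k=2)
w, idx = router(torch.randn(8, 64))
print(w.shape, idx.shape)  # torch.Size([8, 2]) torch.Size([8, 2])
```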
Related papers
- Training Matryoshka Mixture-of-Experts for Elastic Inference-Time Expert Utilization [60.309915093470416]
Matryoshka MoE (M-MoE) is a training framework that instills a coarse-to-fine structure directly into the expert ensemble. Our work paves the way for more practical and adaptable deployments of large-scale MoE models.
arXiv Detail & Related papers (2025-09-30T16:56:44Z) - MoLAE: Mixture of Latent Experts for Parameter-Efficient Language Models [10.623996218106564]
Mixture of Experts (MoE) has become a key architectural paradigm for efficiently scaling Large Language Models (LLMs). We introduce MoLAE, a novel parameterization that reformulates expert operations through a shared projection into a lower-dimensional latent space, followed by expert-specific transformations. We show that MoLAE significantly improves efficiency across multiple dimensions while preserving model capabilities.
arXiv Detail & Related papers (2025-03-29T14:35:34Z) - DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs [86.76714527437383]
This paper proposes DSMoE, a novel approach that achieves sparsification by partitioning pre-trained FFN layers into computational blocks. We implement adaptive expert routing using sigmoid activation and straight-through estimators, enabling tokens to flexibly access different aspects of model knowledge (a minimal sketch of this routing pattern appears after the list below). Experiments on LLaMA models demonstrate that under equivalent computational constraints, DSMoE achieves superior performance compared to existing pruning and MoE approaches.
arXiv Detail & Related papers (2025-02-18T02:37:26Z) - MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts [2.1605931466490795]
We propose a new family of SMoEs named MomentumSMoE.
We prove and numerically demonstrate that MomentumSMoE is more stable and robust than SMoE.
We demonstrate the applicability of MomentumSMoE to many types of Sparse MoE models, including the Sparse MoE model for vision (V-MoE) and the Generalist Language Model (GLaM).
arXiv Detail & Related papers (2024-10-18T16:20:22Z) - Retraining-Free Merging of Sparse MoE via Hierarchical Clustering [14.858134039539697]
This paper introduces Hierarchical Clustering for Sparsely activated Mixture of Experts (HC-SMoE), a task-agnostic expert merging framework for parameter reduction without retraining. We provide theoretical analysis and evaluations across multiple zero-shot language tasks to demonstrate HC-SMoE's effectiveness in state-of-the-art models including Qwen and Mixtral.
arXiv Detail & Related papers (2024-10-11T07:36:14Z) - Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z) - SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z) - SimSMoE: Solving Representational Collapse via Similarity Measure [34.20340688374905]
Sparse Mixture of Experts (SMoE) has emerged as an effective approach for scaling large language models while keeping computational cost constant.
We present Similarity-based Sparse Mixture of Experts (SimSMoE), a novel algorithm based on a neural-network similarity measure that guarantees a solution to the representation collapse issue.
arXiv Detail & Related papers (2024-06-22T16:10:45Z) - On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z) - Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
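As referenced in the DSMoE entry above, the sketch below illustrates sigmoid routing with a straight-through estimator: each token receives a per-block sigmoid score that is hard-thresholded in the forward pass, while gradients flow through the soft scores. This is a generic illustration under the assumptions noted in the comments, not the DSMoE authors' code; the `SigmoidSTERouter` name and the fixed threshold are hypothetical.

```python
# Minimal sketch (not the DSMoE reference implementation) of sigmoid routing
# with a straight-through estimator over partitioned FFN blocks.
import torch
import torch.nn as nn


class SigmoidSTERouter(nn.Module):
    def __init__(self, d_model: int, num_blocks: int, threshold: float = 0.5):
        super().__init__()
        self.gate = nn.Linear(d_model, num_blocks)
        self.threshold = threshold  # hypothetical fixed activation threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft per-block activation probabilities in (0, 1).
        probs = torch.sigmoid(self.gate(x))
        # Hard 0/1 gates used in the forward pass.
        hard = (probs > self.threshold).float()
        # Straight-through estimator: forward value equals `hard`,
        # backward gradient flows through `probs`.
        return hard + probs - probs.detach()


# Usage: gate 4 tokens over 8 partitioned FFN blocks.
router = SigmoidSTERouter(d_model=32, num_blocks=8)
gates = router(torch.randn(4, 32))
print(gates.shape)  # torch.Size([4, 8])
```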