Dynamic Multi-Expert Projectors with Stabilized Routing for Multilingual Speech Recognition
- URL: http://arxiv.org/abs/2601.19451v1
- Date: Tue, 27 Jan 2026 10:37:03 GMT
- Title: Dynamic Multi-Expert Projectors with Stabilized Routing for Multilingual Speech Recognition
- Authors: Isha Pandey, Ashish Mittal, Vartul Bahuguna, Ganesh Ramakrishnan
- Abstract summary: SMEAR-MoE is a stabilized Mixture-of-Experts projector. It delivers up to a 7.6% relative WER reduction over the single-projector baseline. These results demonstrate that stable multi-expert projectors are key to scalable and robust multilingual ASR.
- Score: 12.734282414649682
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in LLM-based ASR connect frozen speech encoders with Large Language Models (LLMs) via lightweight projectors. While effective in monolingual settings, a single projector struggles to capture the diverse acoustic-to-semantic mappings required for multilingual ASR. To address this, we propose SMEAR-MoE, a stabilized Mixture-of-Experts projector that ensures dense gradient flow to all experts, preventing expert collapse while enabling cross-lingual sharing. We systematically compare monolithic, static multi-projector, and dynamic MoE designs across four Indic languages (Hindi, Marathi, Tamil, Telugu). Our SMEAR-MoE achieves strong performance, delivering up to a 7.6% relative WER reduction over the single-projector baseline, while maintaining comparable runtime efficiency. Analysis of expert routing further shows linguistically meaningful specialization, with related languages sharing experts. These results demonstrate that stable multi-expert projectors are key to scalable and robust multilingual ASR.
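The abstract gives no implementation details, but the central mechanism it describes (dense gradient flow to every projector expert so that none collapses) can be sketched. The snippet below is a minimal, hypothetical PyTorch sketch of a SMEAR-style dense-merge projector: instead of hard-routing each utterance to a single expert, the router's softmax weights merge the experts' projection matrices, so all experts stay on the gradient path while the router can still specialize by language. Class and parameter names (`SMEARProjector`, `num_experts`, utterance-level pooling before the router) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SMEARProjector(nn.Module):
    """Hypothetical sketch of a dense-merge MoE projector.

    Maps frozen speech-encoder features (enc_dim) into the LLM embedding
    space (llm_dim). The router produces a softmax over experts and the
    expert projection matrices are merged with those probabilities, so
    every expert stays on the gradient path for every utterance.
    """

    def __init__(self, enc_dim: int, llm_dim: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(enc_dim, num_experts)
        # One projection matrix (and bias) per expert.
        self.expert_w = nn.Parameter(torch.randn(num_experts, enc_dim, llm_dim) * 0.02)
        self.expert_b = nn.Parameter(torch.zeros(num_experts, llm_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, enc_dim)
        # Utterance-level routing: average over time before the router.
        gate = F.softmax(self.router(x.mean(dim=1)), dim=-1)        # (batch, num_experts)
        # Softly merge expert parameters instead of hard top-k dispatch.
        w = torch.einsum("be,eio->bio", gate, self.expert_w)        # (batch, enc_dim, llm_dim)
        b = torch.einsum("be,eo->bo", gate, self.expert_b)          # (batch, llm_dim)
        return torch.einsum("bti,bio->bto", x, w) + b.unsqueeze(1)  # (batch, time, llm_dim)

if __name__ == "__main__":
    proj = SMEARProjector(enc_dim=1024, llm_dim=4096, num_experts=4)
    feats = torch.randn(2, 50, 1024)   # dummy encoder output
    print(proj(feats).shape)           # torch.Size([2, 50, 4096])
```

With hard top-1 routing, experts that lose early routing decisions receive no gradient and tend to collapse; soft merging of this kind avoids that failure mode, which is the stability property the abstract emphasizes.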
Related papers
- Understanding Multilingualism in Mixture-of-Experts LLMs: Routing Mechanism, Expert Specialization, and Layerwise Steering [61.0787902713059]
We propose a routing-guided steering method that adaptively guides routing behavior in middle layers toward shared experts associated with dominant languages at inference time. Our code is available at http://conctsai.com/multilingualism-in-Mixture-of-Experts-LLMs.
arXiv Detail & Related papers (2026-01-20T15:04:25Z) - MultiPL-MoE: Multi-Programming-Lingual Extension of Large Language Models through Hybrid Mixture-of-Experts [56.106778414865126]
MultiPL-MoE is a hybrid of segment-level and token-level mixtures of experts. The segment-level MoE incorporates two innovative designs to better capture the syntactic structure and contextual patterns of programming languages.
arXiv Detail & Related papers (2025-08-22T06:24:52Z) - Efficient Multilingual ASR Finetuning via LoRA Language Experts [59.27778147311189]
This paper proposes an efficient finetuning framework for customized multilingual ASR via prepared LoRA language experts based on Whisper. Through LoRA expert fusion or knowledge distillation, our approach achieves better recognition performance on target languages than standard fine-tuning methods. Experimental results demonstrate that the proposed models yield approximately 10% and 15% relative performance gains in language-aware and language-agnostic scenarios.
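The summary above does not spell out the fusion recipe; as a rough, hypothetical illustration of what fusing per-language LoRA experts could look like, the sketch below keeps a frozen base linear layer and blends several low-rank deltas with per-language mixture weights (names such as `FusedLoRALinear` and `lang_weights` are assumptions, not the paper's code).

```python
import torch
import torch.nn as nn

class FusedLoRALinear(nn.Module):
    """Hypothetical sketch: a frozen linear layer plus a weighted
    fusion of per-language LoRA experts (rank-r deltas)."""

    def __init__(self, in_dim: int, out_dim: int, num_langs: int, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)   # frozen backbone weight
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(num_langs, rank, in_dim) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(num_langs, out_dim, rank))

    def forward(self, x: torch.Tensor, lang_weights: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, in_dim); lang_weights: (num_langs,) summing to 1.
        # Fuse the per-language low-rank updates into one weight delta.
        delta = torch.einsum("l,lor,lri->oi", lang_weights, self.lora_b, self.lora_a)
        return self.base(x) + x @ delta.T
```

A one-hot `lang_weights` vector recovers a single language expert (the language-aware setting); soft weights approximate a fused, language-agnostic model.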
arXiv Detail & Related papers (2025-06-11T07:06:27Z) - Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models [70.41727912081463]
Multi-modal large language models (MLLMs) have rapidly advanced in visual tasks, yet their spatial understanding remains limited to single images. We propose a framework to equip MLLMs with robust multi-frame spatial understanding by integrating depth perception, visual correspondence, and dynamic perception. Our model, Multi-SpatialMLLM, achieves significant gains over baselines and proprietary systems, demonstrating scalable, generalizable multi-frame reasoning.
arXiv Detail & Related papers (2025-05-22T17:59:39Z) - Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach [37.690797152736465]
Llama-SMoP employs a Sparse Mixture of Projectors (SMoP) module to scale model capacity without increasing inference costs. It achieves superior performance on ASR, VSR, and AVSR tasks.
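For contrast with the dense merging sketched under the main abstract, a sparse mixture of projectors typically activates only the top-k experts per input, so capacity grows with the number of experts while per-utterance compute stays roughly flat. The sketch below is a hypothetical top-k variant for illustration, not the Llama-SMoP implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseProjectorMoE(nn.Module):
    """Hypothetical sketch of a sparse mixture of projectors:
    only the top-k experts run per utterance."""

    def __init__(self, enc_dim: int, llm_dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(enc_dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(enc_dim, llm_dim) for _ in range(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, enc_dim)
        logits = self.router(x.mean(dim=1))            # utterance-level routing
        topv, topi = logits.topk(self.k, dim=-1)       # keep only the k best experts
        gate = F.softmax(topv, dim=-1)                 # renormalise over the chosen k
        outs = []
        for b in range(x.size(0)):
            y = sum(gate[b, j] * self.experts[int(topi[b, j])](x[b]) for j in range(self.k))
            outs.append(y)
        return torch.stack(outs, dim=0)                # (batch, time, llm_dim)
```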
arXiv Detail & Related papers (2025-05-20T13:20:55Z) - LUSIFER: Language Universal Space Integration for Enhanced Multilingual Embeddings with Large Language Models [89.13128402847943]
We present LUSIFER, a novel zero-shot approach that adapts LLM-based embedding models for multilingual tasks without requiring multilingual supervision. LUSIFER's architecture combines a multilingual encoder, serving as a language-universal learner, with an LLM-based embedding model optimized for embedding-specific tasks. We introduce a new benchmark encompassing 5 primary embedding tasks, 123 diverse datasets, and coverage across 14 languages.
arXiv Detail & Related papers (2025-01-01T15:43:07Z) - Boosting Code-Switching ASR with Mixture of Experts Enhanced Speech-Conditioned LLM [1.3089936156875277]
We introduce a speech-conditioned Large Language Model (LLM) integrated with a Mixture of Experts (MoE) based connector.
We propose an Insertion and Deletion of Interruption Token (IDIT) mechanism to better transfer the text generation ability of the LLM to the speech recognition task.
We also present a connector with a MoE architecture that manages multiple languages efficiently.
arXiv Detail & Related papers (2024-09-24T09:20:22Z) - MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models [57.091523832149655]
We propose a mixture of multimodal experts (MoME) to mitigate task interference and obtain a generalist MLLM.
Our MoME is composed of two key components: a mixture of vision experts (MoVE) and a mixture of language experts (MoLE).
arXiv Detail & Related papers (2024-07-17T16:31:38Z) - Multilingual DistilWhisper: Efficient Distillation of Multi-task Speech Models via Language-Specific Experts [14.999359332108767]
We propose DistilWhisper to bridge the performance gap in ASR for under-represented languages.
Our approach involves two key strategies: lightweight modular ASR fine-tuning of whisper-small using language-specific experts, and knowledge distillation from whisper-large-v2.
Results demonstrate that our approach is more effective than standard fine-tuning or LoRA adapters.
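The distillation half of this recipe is standard; as a rough, hypothetical sketch (not the DistilWhisper code), a student objective can mix cross-entropy on the reference transcripts with a temperature-scaled KL term that pulls the student's token distribution toward whisper-large-v2's.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      alpha: float = 0.5,
                      temperature: float = 2.0) -> torch.Tensor:
    """Hypothetical KD objective: CE on labels plus temperature-scaled KL
    between the student's and teacher's token distributions.

    student_logits, teacher_logits: (batch, time, vocab); labels: (batch, time).
    """
    ce = F.cross_entropy(student_logits.flatten(0, 1), labels.flatten(), ignore_index=-100)
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - alpha) * ce + alpha * kl
```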
arXiv Detail & Related papers (2023-11-02T08:37:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.