Distributed Hybrid Parallelism for Large Language Models: Comparative Study and System Design Guide
- URL: http://arxiv.org/abs/2602.09109v1
- Date: Mon, 09 Feb 2026 19:01:13 GMT
- Title: Distributed Hybrid Parallelism for Large Language Models: Comparative Study and System Design Guide
- Authors: Hossam Amer, Rezaul Karim, Ali Pourranjbar, Weiwei Zhang, Walid Ahmed, Boxing Chen,
- Abstract summary: This paper offers a comprehensive review of collective operations and distributed parallel strategies. We examine hybrid parallelization designs, emphasizing communication overlap across different stages of model deployment. We highlight open challenges and limitations of current LLM training paradigms and outline promising directions for the next generation of large-scale model development.
- Score: 15.92814573525633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid growth of large language models (LLMs), a wide range of methods have been developed to distribute computation and memory across hardware devices for efficient training and inference. While existing surveys provide descriptive overviews of these techniques, systematic analysis of their benefits and trade-offs, and of how such insights can inform a principled methodology for designing optimal distributed systems, remains limited. This paper offers a comprehensive review of collective operations and distributed parallel strategies, complemented by mathematical formulations to deepen theoretical understanding. We further examine hybrid parallelization designs, emphasizing communication-computation overlap across different stages of model deployment, including both training and inference. Recent advances in automated search for optimal hybrid parallelization strategies using cost models are also discussed. Moreover, we present case studies on mainstream architecture categories to reveal empirical insights that guide researchers and practitioners in parallelism strategy selection. Finally, we highlight open challenges and limitations of current LLM training paradigms and outline promising directions for the next generation of large-scale model development.
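The abstract refers to cost models used to search for good hybrid parallelization strategies. As a minimal illustration (not taken from the paper), the sketch below compares the per-step communication volume of pure data parallelism against Megatron-style tensor parallelism using the standard ring all-reduce estimate; the example configuration, byte widths, and per-layer all-reduce count are assumptions for illustration only.

```python
# Minimal sketch of a communication cost model, in the spirit of the
# cost-model-driven strategy search the abstract mentions. Formulas are the
# standard ring all-reduce estimates; the example configuration below is an
# illustrative assumption, not a result from the paper.

def allreduce_bytes(payload_bytes: float, n_devices: int) -> float:
    """Ring all-reduce moves roughly 2*(n-1)/n of the payload per device."""
    return 2.0 * (n_devices - 1) / n_devices * payload_bytes

def data_parallel_comm(params: int, n_devices: int, bytes_per_elem: int = 2) -> float:
    """Per-step gradient all-reduce volume for pure data parallelism."""
    return allreduce_bytes(params * bytes_per_elem, n_devices)

def tensor_parallel_comm(batch: int, seq: int, hidden: int, layers: int,
                         n_devices: int, bytes_per_elem: int = 2) -> float:
    """Per-step activation all-reduce volume for Megatron-style tensor
    parallelism: roughly four all-reduces of (batch * seq * hidden) activations
    per transformer layer (two in the forward pass, two in the backward pass)."""
    per_layer = 4 * allreduce_bytes(batch * seq * hidden * bytes_per_elem, n_devices)
    return layers * per_layer

if __name__ == "__main__":
    # Illustrative 7B-parameter-scale configuration on 8 devices (assumed).
    dp = data_parallel_comm(params=7_000_000_000, n_devices=8)
    tp = tensor_parallel_comm(batch=8, seq=4096, hidden=4096, layers=32, n_devices=8)
    print(f"data-parallel comm/step:   {dp / 1e9:.1f} GB")
    print(f"tensor-parallel comm/step: {tp / 1e9:.1f} GB")
```

A real strategy search would extend such a model with interconnect bandwidths, pipeline bubble terms, and memory limits, and then pick the hybrid layout that minimizes estimated step time.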
Related papers
- Deep Unfolding: Recent Developments, Theory, and Design Guidelines [99.63555420898554]
This article provides a tutorial-style overview of deep unfolding, a framework that transforms optimization algorithms into structured, trainable ML architectures. We review the foundations of optimization for inference and for learning, introduce four representative design paradigms for deep unfolding, and discuss the distinctive training schemes that arise from their iterative nature.
arXiv Detail & Related papers (2025-12-03T13:16:35Z)
- Hybrid Architectures for Language Models: Systematic Analysis and Design Insights [17.46576657832284]
Large language models combining self-attention mechanisms with structured state space models like Mamba can achieve a compelling balance between modeling quality and computational efficiency. We present a holistic evaluation of hybrid architectures based on inter-layer (sequential) or intra-layer (parallel) fusion.
arXiv Detail & Related papers (2025-10-06T13:30:07Z)
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models [51.817121227562964]
Large Language Models (LLMs) have delivered impressive results in language understanding, generation, and reasoning, and have pushed the capability boundary of multimodal models. Transformer models, as the foundation of modern LLMs, offer a strong baseline with excellent scaling properties. However, the traditional Transformer architecture requires substantial computation and poses significant obstacles for large-scale training and practical deployment.
arXiv Detail & Related papers (2025-08-13T14:13:46Z)
- Research on Model Parallelism and Data Parallelism Optimization Methods in Large Language Model-Based Recommendation Systems [6.453224262551299]
Large language models (LLMs) in recommendation systems have become increasingly prominent. This paper systematically investigates two classes of optimization methods: model parallelism and data parallelism. Experiments conducted on a real-world recommendation dataset in a simulated service environment demonstrate that our proposed hybrid parallelism scheme increases training throughput by over 30%.
arXiv Detail & Related papers (2025-06-21T02:37:25Z)
- Speculative Decoding and Beyond: An In-Depth Survey of Techniques [9.362838127921458]
Sequential dependencies present a fundamental bottleneck in deploying large-scale autoregressive models. Recent advances in generation-refinement frameworks demonstrate that this trade-off can be significantly mitigated.
arXiv Detail & Related papers (2025-02-27T03:53:45Z)
- Agent-Centric Projection of Prompting Techniques and Implications for Synthetic Training Data for Large Language Models [0.8879149917735942]
This paper introduces and explains the concepts of linear contexts (a single, continuous sequence of interactions) and non-linear contexts (branching or multi-path) in Large Language Models (LLMs). These concepts enable the development of an agent-centric projection of prompting techniques, a framework that can reveal deep connections between prompting strategies and multi-agent systems.
arXiv Detail & Related papers (2025-01-14T03:26:43Z)
- A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation. Deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency. This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z)
- Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities [89.40778301238642]
Model merging is an efficient empowerment technique in the machine learning community.
There is a significant gap in the literature regarding a systematic and thorough review of these techniques.
arXiv Detail & Related papers (2024-08-14T16:58:48Z)
- A Survey of Distributed Learning in Cloud, Mobile, and Edge Settings [1.0589208420411014]
This survey explores the landscape of distributed learning, encompassing cloud and edge settings.
We delve into the core concepts of data and model parallelism, examining how models are partitioned across different dimensions and layers to optimize resource utilization and performance.
We analyze various partitioning schemes for different layer types, including fully connected, convolutional, and recurrent layers, highlighting the trade-offs between computational efficiency, communication overhead, and memory constraints; a minimal sketch of one such partitioning for a fully connected layer appears after this list.
arXiv Detail & Related papers (2024-05-23T22:00:38Z)
- Understanding LLMs: A Comprehensive Overview from Training to Inference [52.70748499554532]
Low-cost training and deployment of large language models represent the future development trend.
The discussion of training covers various aspects, including data preprocessing, training architecture, pre-training tasks, parallel training, and content related to model fine-tuning.
On the inference side, the paper covers topics such as model compression, parallel computation, memory scheduling, and structural optimization.
arXiv Detail & Related papers (2024-01-04T02:43:57Z)
- Embedding in Recommender Systems: A Survey [54.55152033023537]
This survey presents a comprehensive analysis of advances in recommender system embedding techniques. In matrix-based scenarios, collaborative filtering generates embeddings that effectively model user-item preferences. We introduce emerging approaches, including AutoML, hashing techniques, and quantization methods, to enhance performance.
arXiv Detail & Related papers (2023-10-28T06:31:06Z)
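As referenced in the distributed learning survey entry above, the following is a minimal sketch (not from any of the listed papers) of column-wise partitioning of a fully connected layer, one common tensor-parallel scheme. NumPy slicing stands in for real devices, and all shapes and names are illustrative assumptions.

```python
# Hedged sketch: column-wise (output-dimension) partitioning of a fully
# connected layer, simulated in one process with NumPy. Each "device" owns a
# slice of the weight columns; concatenation stands in for the all-gather a
# real system would perform. Shapes and names are illustrative assumptions.
import numpy as np

def column_parallel_linear(x, W, b, n_devices):
    """Compute x @ W + b with W's output columns sharded across n_devices."""
    out_shards = []
    for shard_W, shard_b in zip(np.array_split(W, n_devices, axis=1),
                                np.array_split(b, n_devices)):
        out_shards.append(x @ shard_W + shard_b)   # local compute, no communication
    return np.concatenate(out_shards, axis=-1)     # communication (all-gather) step

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 512))     # (batch, in_features), illustrative
    W = rng.standard_normal((512, 2048))  # (in_features, out_features)
    b = rng.standard_normal(2048)
    y_sharded = column_parallel_linear(x, W, b, n_devices=4)
    y_reference = x @ W + b
    assert np.allclose(y_sharded, y_reference)     # sharded result matches dense layer
```

Row-wise partitioning of the same layer would instead shard the input dimension and require a reduce (sum) rather than a gather, which is one source of the trade-offs between communication overhead and memory noted above.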