Exploiting Inter-Layer Expert Affinity for Accelerating
Mixture-of-Experts Model Inference
- URL: http://arxiv.org/abs/2401.08383v2
- Date: Wed, 17 Jan 2024 03:37:00 GMT
- Title: Exploiting Inter-Layer Expert Affinity for Accelerating
Mixture-of-Experts Model Inference
- Authors: Jinghan Yao, Quentin Anthony, Aamir Shafi, Hari Subramoni, Dhabaleswar
K. (DK) Panda
- Abstract summary: We propose a lightweight optimization technique called ExFlow to substantially accelerate the inference of pre-trained MoE models.
By exploiting the inter-layer expert affinity, our solution can be directly applied to pre-trained MoE models without any fine-tuning or accuracy degradation.
Our solution outperforms cutting-edge MoE implementations with expert counts from 8 to 64, delivering up to a 2.2x improvement in inference throughput.
- Score: 3.217776693788795
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In large language models like the Generative Pre-trained Transformer, the
Mixture of Experts paradigm has emerged as a powerful technique for enhancing
model expressiveness and accuracy. However, deploying GPT MoE models for
parallel inference on distributed systems presents significant challenges,
primarily due to the extensive Alltoall communication required for expert
routing and aggregation. This communication bottleneck exacerbates the already
complex computational landscape, hindering the efficient utilization of
high-performance computing resources. In this paper, we propose a lightweight
optimization technique called ExFlow to substantially accelerate the inference of
these MoE models. We take a new perspective on alleviating the communication
overhead by exploiting the inter-layer expert affinity. Unlike previous
methods, our solution can be directly applied to pre-trained MoE models without
any fine-tuning or accuracy degradation. By introducing a context-coherent
expert parallelism on distributed systems, our design uses only one Alltoall
communication to deliver the same functionality, whereas previous methods all
require two Alltoalls. By carefully examining the conditional probabilities of
token routing across multiple layers, we prove that pre-trained GPT MoE models
implicitly exhibit a strong inter-layer expert affinity. We then design an
efficient integer programming model to capture this affinity and show that, by
properly placing experts on the corresponding GPUs, we can reduce cross-GPU
routing latency by up to 67%. Our solution outperforms cutting-edge MoE
implementations with expert counts from 8 to 64, delivering up to a 2.2x
improvement in inference throughput. We further provide a detailed study of
how the model implicitly acquires this expert affinity at a very early
training stage and how this affinity evolves and stabilizes during training.
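The core quantity behind ExFlow is the conditional probability that a token routed to expert i at layer l lands on expert j at layer l+1. The sketch below is a minimal illustration of that idea, not the paper's code: it estimates the affinity from recorded top-1 routing decisions and then greedily co-locates high-affinity experts on the same GPU, whereas the paper formulates placement as an integer program. All names (affinity_matrix, greedy_placement) and the round-robin layer-l placement are assumptions.

```python
# Minimal sketch (not the paper's implementation): estimate inter-layer expert
# affinity from routing traces and greedily co-locate high-affinity experts.
import numpy as np

def affinity_matrix(routes_l, routes_l1, num_experts):
    """Empirical P(expert j at layer l+1 | expert i at layer l).

    routes_l, routes_l1: integer arrays with each token's chosen expert at
    layer l and layer l+1 (top-1 routing assumed for simplicity).
    """
    counts = np.zeros((num_experts, num_experts))
    for i, j in zip(routes_l, routes_l1):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

def greedy_placement(affinity, num_gpus):
    """Place layer-(l+1) experts so that high-affinity pairs share a GPU.

    Assumes layer-l experts are placed round-robin (expert e on GPU e % num_gpus)
    and num_experts is divisible by num_gpus; a greedy heuristic standing in for
    the paper's integer-programming model.
    """
    num_experts = affinity.shape[0]
    capacity = num_experts // num_gpus
    layer_l_gpu = np.arange(num_experts) % num_gpus

    # mass[g, j]: affinity flowing from layer-l experts hosted on GPU g into
    # layer-(l+1) expert j.
    mass = np.zeros((num_gpus, num_experts))
    for i in range(num_experts):
        mass[layer_l_gpu[i]] += affinity[i]

    placement = -np.ones(num_experts, dtype=int)
    load = np.zeros(num_gpus, dtype=int)
    # Assign the experts with the strongest single-GPU pull first.
    for j in np.argsort(-mass.max(axis=0)):
        for g in np.argsort(-mass[:, j]):
            if load[g] < capacity:
                placement[j] = g
                load[g] += 1
                break
    return placement  # placement[j] = GPU hosting expert j at layer l+1
```

Tokens whose consecutive-layer experts end up on the same GPU never cross GPUs between those layers, which is the kind of cross-GPU routing the abstract's up-to-67% latency reduction targets.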
Related papers
- EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference [49.94169109038806]
This paper introduces EPS-MoE, a novel expert pipeline scheduler for MoE.
Our results demonstrate an average 21% improvement in prefill throughput over existing parallel inference methods.
arXiv Detail & Related papers (2024-10-16T05:17:49Z)
- Parm: Efficient Training of Large Sparsely-Activated Models with Dedicated Schedules [15.680276212483292]
We propose Parm, a system that accelerates MP+EP+ESP training by designing two dedicated schedules for placing communication tasks.
Parm achieves 1.13x to 5.77x speedup on 1296 manually configured MoE layers and approximately 3x improvement on two real-world MoE models.
arXiv Detail & Related papers (2024-06-30T05:55:11Z)
- Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion [53.33473557562837]
Solving multi-objective optimization problems for large deep neural networks is challenging due to the complexity of the loss landscape and the high computational cost.
We propose a practical and scalable approach to solve this problem via mixture of experts (MoE) based model fusion.
By ensembling the weights of specialized single-task models, the MoE module can effectively capture the trade-offs between multiple objectives.
arXiv Detail & Related papers (2024-06-14T07:16:18Z)
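One possible reading of the weight-ensembling idea in the entry above, offered purely as a hedged sketch and not as the paper's method: treat the normalized objective preference as the gate and take a convex combination of the single-task models' parameters. Names (fuse_weights, task_weights, preference) are illustrative.

```python
# Hedged sketch: preference-gated fusion of single-task model weights
# (an assumed, simplified reading of MoE-based model fusion).
import numpy as np

def fuse_weights(task_weights, preference):
    """task_weights: list of per-task parameter vectors (all the same shape).
    preference:   non-negative importance assigned to each objective.
    Returns one fused parameter vector at that preference's trade-off point."""
    prefs = np.asarray(preference, dtype=float)
    gate = prefs / prefs.sum()            # normalized gating weights over experts
    stacked = np.stack([np.asarray(w, dtype=float) for w in task_weights])
    return gate @ stacked                 # convex combination of expert weights
```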
- A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts [49.394145046409044]
This paper provides the first provably efficient technique for pruning experts in fine-tuned MoE models.
We theoretically prove that prioritizing the pruning of experts with a smaller change in the router's l2 norm from the pre-trained model guarantees the preservation of test accuracy.
Although our theoretical analysis is centered on binary classification tasks with a simplified MoE architecture, our expert pruning method is verified on large vision MoE models.
arXiv Detail & Related papers (2024-05-26T17:52:58Z)
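The criterion summarized above can be sketched in a few lines, under one reading of it: score each expert by the l2 norm of the change in its router (gating) row between the pre-trained and fine-tuned checkpoints, and prune the experts with the smallest scores. Names and shapes are assumptions, not the paper's code.

```python
# Hedged sketch of router-change-based expert pruning (assumed reading of the
# criterion; not the paper's implementation).
import numpy as np

def experts_to_prune(router_pretrained, router_finetuned, num_to_prune):
    """router_*: (num_experts, hidden_dim) gating weight matrices.
    Returns indices of the experts whose router rows changed the least,
    i.e. the ones to prune first under this criterion."""
    delta = np.linalg.norm(router_finetuned - router_pretrained, axis=1)
    return np.argsort(delta)[:num_to_prune]
```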
- Lancet: Accelerating Mixture-of-Experts Training via Whole Graph Computation-Communication Overlapping [14.435637320909663]
The MoE technique plays a crucial role in expanding the size of DNN model parameters.
Existing methods attempt to mitigate this issue by overlapping all-to-all with expert computation.
In our study, we extend the scope of this challenge by considering overlap at the broader training graph level.
We implement these techniques in Lancet, a system using compiler-based optimization to automatically enhance MoE model training.
arXiv Detail & Related papers (2024-04-30T10:17:21Z)
- Shortcut-connected Expert Parallelism for Accelerating Mixture-of-Experts [4.629608387540524]
We present a novel shortcut-connected MoE (ScMoE) architecture with an overlapping parallel strategy.
ScMoE allows a substantial 70% to 100% overlap of communication with computation.
Building on the ScMoE architecture, we further implement an expert offloading strategy to facilitate memory-limited inference.
arXiv Detail & Related papers (2024-04-07T17:17:23Z)
- ATOM: Asynchronous Training of Massive Models for Deep Learning in a Decentralized Environment [7.916080032572087]
ATOM is a resilient distributed training framework designed for asynchronous training of vast models in a decentralized setting.
ATOM aims to accommodate a complete LLM on one host (peer) through seamless model swapping and concurrently trains multiple copies across various peers to optimize training throughput.
Our experiments using different GPT-3 model configurations reveal that, in scenarios with suboptimal network connections, ATOM can enhance training efficiency by up to 20x compared with state-of-the-art decentralized pipeline parallelism approaches.
arXiv Detail & Related papers (2024-03-15T17:43:43Z)
- LocMoE: A Low-Overhead MoE for Large Language Model Training [13.153904674287546]
We propose a novel routing strategy that combines load balance and locality by converting part of the inter-node communication into intra-node communication.
The proposed LocMoE reduces training time per epoch by 12.68% to 22.24% compared to classical routers.
arXiv Detail & Related papers (2024-01-25T03:36:39Z)
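As a generic illustration of trading a little routing freedom for locality (not LocMoE's actual routing rule), a gate can add a bonus to the logits of experts hosted on the token's own node so that more dispatch traffic stays intra-node. The function name, arguments, and bonus value below are assumptions.

```python
# Hedged sketch: locality-biased top-1 gating (assumed illustration, not LocMoE).
import numpy as np

def locality_biased_top1(gate_logits, expert_node, local_node, bonus=1.0):
    """gate_logits: (num_tokens, num_experts) raw router scores.
    expert_node:  (num_experts,) node id hosting each expert.
    local_node:   node id of the rank that produced these tokens.
    Returns the chosen expert per token, with local experts favored."""
    biased = gate_logits + bonus * (np.asarray(expert_node) == local_node)
    return np.argmax(biased, axis=1)
```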
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.