MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
- URL: http://arxiv.org/abs/2211.15841v1
- Date: Tue, 29 Nov 2022 00:27:08 GMT
- Title: MegaBlocks: Efficient Sparse Training with Mixture-of-Experts
- Authors: Trevor Gale, Deepak Narayanan, Cliff Young, Matei Zaharia
- Abstract summary: MegaBlocks is a system for efficient Mixture-of-Experts (MoE) training on GPUs.
We reformulate MoE in terms of block-sparse operations and develop new block-sparse GPU kernels.
Our approach never drops tokens and maps efficiently to modern hardware, enabling end-to-end training speedups of up to 40% over MoEs trained with the state-of-the-art Tutel library.
- Score: 19.541303844245835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE)
training on GPUs. Our system is motivated by the limitations of current
frameworks, which restrict the dynamic routing in MoE layers to satisfy the
constraints of existing software and hardware. These formulations force a
tradeoff between model quality and hardware efficiency, as users must choose
between dropping tokens from the computation or wasting computation and memory
on padding. To address these limitations, we reformulate MoE computation in
terms of block-sparse operations and develop new block-sparse GPU kernels that
efficiently handle the dynamism present in MoEs. Our approach never drops
tokens and maps efficiently to modern hardware, enabling end-to-end training
speedups of up to 40% over MoEs trained with the state-of-the-art Tutel library
and 2.4x over DNNs trained with the highly-optimized Megatron-LM framework.
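The reformulation described in the abstract can be made concrete with a small sketch. The code below is illustrative only and does not use the MegaBlocks kernels or API; it assumes top-1 routing and placeholder names (dropless_moe, w1, w2), and it emulates the block-sparse expert computation by grouping tokens per expert and running one variable-sized matmul per group, so no tokens are dropped and no padding to a fixed expert capacity is needed.

```python
# Illustrative sketch of dropless MoE computation (not the MegaBlocks kernels):
# tokens are grouped by their assigned expert and each group is processed with a
# variable-sized matmul -- the dense equivalent of the block-sparse product that
# a fused kernel would execute in one shot. No token dropping, no padding.
import torch

def dropless_moe(x, router_w, w1, w2):
    """x: [tokens, d_model]; router_w: [d_model, E];
    w1: [E, d_model, d_ff]; w2: [E, d_ff, d_model] (hypothetical shapes)."""
    num_experts = router_w.shape[1]
    # Top-1 gating: each token picks one expert.
    probs = torch.softmax(x @ router_w, dim=-1)          # [tokens, E]
    gate, expert_idx = probs.max(dim=-1)                  # [tokens]

    # Sort tokens by expert so each expert's tokens are contiguous.
    order = torch.argsort(expert_idx)
    counts = torch.bincount(expert_idx, minlength=num_experts)

    out_sorted = torch.empty_like(x)
    start = 0
    for e in range(num_experts):
        n = int(counts[e])
        if n == 0:
            continue
        rows = x[order[start:start + n]]                  # variable-sized block
        out_sorted[start:start + n] = torch.relu(rows @ w1[e]) @ w2[e]
        start += n

    # Undo the sort and scale by the gate probability.
    out = torch.empty_like(x)
    out[order] = out_sorted
    return out * gate.unsqueeze(-1)

# Usage: 8 experts, 1024 tokens, no capacity factor and no dropped tokens.
x = torch.randn(1024, 64)
out = dropless_moe(x, torch.randn(64, 8),
                   torch.randn(8, 64, 128), torch.randn(8, 128, 64))
```

The per-expert loop is exactly the dynamism that the paper's block-sparse GPU kernels are designed to absorb; the sketch only shows why variable-sized expert batches remove the drop-versus-pad tradeoff.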
Related papers
- Dynamic Tsetlin Machine Accelerators for On-Chip Training at the Edge using FPGAs [0.3440236962613469]
This paper presents a Dynamic Tsetlin Machine (DTM) training accelerator as an alternative to Deep Neural Networks (DNNs).
The DTM trains with fewer multiply-accumulate operations and requires no derivative computation.
The proposed accelerator offers 2.54x more Giga operations per second per Watt (GOP/s per W) and uses 6x less power than the next-best comparable design.
arXiv Detail & Related papers (2025-04-28T13:38:53Z)
- Accelerating MoE Model Inference with Expert Sharding [1.4733737463429546]
Mixture of experts (MoE) models achieve state-of-the-art results in language modeling but suffer from inefficient hardware utilization due to imbalanced token routing and communication overhead.
We introduce MoEShard, an inference system that achieves perfect load balancing through tensor sharding of MoE experts; a generic sketch of this sharding pattern appears after this list.
arXiv Detail & Related papers (2025-03-11T14:15:01Z)
- AutoHete: An Automatic and Efficient Heterogeneous Training System for LLMs [68.99086112477565]
Transformer-based large language models (LLMs) have demonstrated exceptional capabilities in sequence modeling and text generation.
Existing heterogeneous training methods significantly expand the scale of trainable models but introduce substantial communication overheads and CPU workloads.
We propose AutoHete, an automatic and efficient heterogeneous training system compatible with both single-GPU and multi-GPU environments.
arXiv Detail & Related papers (2025-02-27T14:46:22Z)
- Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design [59.00758127310582]
We propose a novel framework Read-ME that transforms pre-trained dense LLMs into smaller MoE models.
Our approach employs activation sparsity to extract experts.
Read-ME outperforms other popular open-source dense models of similar scales.
arXiv Detail & Related papers (2024-10-24T19:48:51Z)
- EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference [49.94169109038806]
This paper introduces EPS-MoE, a novel expert pipeline scheduler for MoE.
Our results demonstrate an average 21% improvement in prefill throughput over existing parallel inference methods.
arXiv Detail & Related papers (2024-10-16T05:17:49Z)
- Realizing Unaligned Block-wise Pruning for DNN Acceleration on Mobile Devices [1.6114012813668932]
Block-wise pruning is promising because it trades a small accuracy drop for substantial speedup gains, but restricting blocks to aligned positions limits which weights can be kept.
Unaligned block pruning (UBP) addresses this by allowing blocks to be selected at arbitrary positions.
We propose a pseudo-optimal yet fast block selection algorithm called Block Expansion and Division.
arXiv Detail & Related papers (2024-07-29T01:59:06Z)
- Weight Block Sparsity: Training, Compilation, and AI Engine Accelerators [0.0]
Deep Neural Networks (DNNs) are being developed, trained, and utilized, putting a strain on both advanced and limited devices.
Our solution is to implement weight block sparsity, a structured form of sparsity that is friendly to hardware.
We present performance estimates based on accurate and complete code generation for AIE2 configuration sets (AMD Versal FPGAs) with ResNet50, Inception V3, and VGG16.
arXiv Detail & Related papers (2024-07-12T17:37:49Z)
- Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference [23.207326766883405]
Mixture-of-Experts (MoE) is able to scale its model size without proportionally scaling up its computational requirements.
Pre-gated MoE employs our novel pre-gating function which alleviates the dynamic nature of sparse expert activation.
We demonstrate that Pre-gated MoE improves performance and reduces GPU memory consumption while maintaining the same level of model quality; a sketch of the pre-gating pattern appears after this list.
arXiv Detail & Related papers (2023-08-23T11:25:37Z)
- In Situ Framework for Coupling Simulation and Machine Learning with Application to CFD [51.04126395480625]
Recent years have seen many successful applications of machine learning (ML) to facilitate fluid dynamic computations.
As simulations grow, generating new training datasets for traditional offline learning creates I/O and storage bottlenecks.
This work offers a solution by simplifying this coupling and enabling in situ training and inference on heterogeneous clusters.
arXiv Detail & Related papers (2023-06-22T14:07:54Z)
- Bulk-Switching Memristor-based Compute-In-Memory Module for Deep Neural Network Training [15.660697326769686]
We propose a mixed-precision training scheme for memristor-based compute-in-memory (CIM) modules.
The proposed scheme is implemented with a system-on-chip (SoC) of fully integrated analog CIM modules and digital sub-systems.
The efficacy of training larger models is evaluated using realistic hardware parameters, showing that analog CIM modules can enable efficient mixed-precision training with accuracy comparable to full-precision software-trained models.
arXiv Detail & Related papers (2023-05-23T22:03:08Z)
- AutoMoE: Heterogeneous Mixture-of-Experts with Adaptive Computation for Efficient Neural Machine Translation [104.0979785739202]
Mixture-of-Expert (MoE) models have obtained state-of-the-art performance in Neural Machine Translation (NMT) tasks.
Existing MoE models mostly consider a homogeneous design where the same number of experts of the same size are placed uniformly throughout the network.
We develop AutoMoE, a framework for designing heterogeneous MoEs under computational constraints.
arXiv Detail & Related papers (2022-10-14T05:32:17Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially on Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits via soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URL
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- Towards Efficient Post-training Quantization of Pre-trained Language Models [85.68317334241287]
We study post-training quantization (PTQ) of pre-trained language models (PLMs), and propose module-wise quantization error minimization (MREM), an efficient solution to mitigate these issues.
Experiments on GLUE and SQuAD benchmarks show that our proposed PTQ solution not only performs close to QAT, but also enjoys significant reductions in training time, memory overhead, and data consumption.
arXiv Detail & Related papers (2021-09-30T12:50:06Z)
- FastMoE: A Fast Mixture-of-Expert Training System [20.74001755688784]
Mixture-of-Expert (MoE) presents strong potential for enlarging the size of language models to trillions of parameters.
FastMoE is a distributed MoE training system based on PyTorch with common accelerators.
arXiv Detail & Related papers (2021-03-24T15:27:15Z)
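As referenced in the MoEShard entry above, tensor sharding of experts can balance load regardless of how tokens are routed. The sketch below is a generic illustration of that idea, not the MoEShard implementation; all names and shapes (shard_experts, sharded_expert_forward) are assumptions. Each simulated device owns a column slice of every expert's first projection and the matching row slice of the second, so per-device work is identical however routing turns out, and summing the partial outputs stands in for an all-reduce.

```python
# Illustrative sketch of expert tensor sharding (not the MoEShard implementation):
# every simulated "device" holds a 1/S column slice of each expert's first matrix
# and the matching row slice of the second, so per-device work does not depend on
# token routing; summing the partial outputs plays the role of an all-reduce.
import torch

def shard_experts(w1, w2, num_shards):
    """w1: [E, d_model, d_ff]; w2: [E, d_ff, d_model] (hypothetical shapes)."""
    return list(zip(torch.chunk(w1, num_shards, dim=2),
                    torch.chunk(w2, num_shards, dim=1)))

def sharded_expert_forward(x, expert_idx, shards):
    # Each shard computes a partial output for all tokens of all experts.
    partial_outputs = []
    for w1_s, w2_s in shards:                 # one iteration per simulated device
        out = torch.zeros(x.shape[0], w2_s.shape[2])
        for e in range(w1_s.shape[0]):
            rows = expert_idx == e
            if rows.any():
                out[rows] = torch.relu(x[rows] @ w1_s[e]) @ w2_s[e]
        partial_outputs.append(out)
    return sum(partial_outputs)               # all-reduce in a real system

# Usage: 4 experts sharded across 2 simulated devices.
x = torch.randn(256, 32)
expert_idx = torch.randint(0, 4, (256,))
shards = shard_experts(torch.randn(4, 32, 64), torch.randn(4, 64, 32), num_shards=2)
y = sharded_expert_forward(x, expert_idx, shards)
```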
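Similarly, for the Pre-gated MoE entry above, the sketch below shows one plausible reading of pre-gating, not the paper's actual algorithm; PreGatedBlock and next_router are hypothetical names. The router for block i+1 runs on block i's output, so the experts the next block will need are known one step early and their weights could be fetched before that block executes.

```python
# Illustrative sketch of pre-gating (one plausible reading, not the paper's
# algorithm): each block decides the expert selection for the *next* block,
# so the next block's expert weights could be prefetched ahead of time.
import torch

class PreGatedBlock(torch.nn.Module):
    def __init__(self, d_model, d_ff, num_experts):
        super().__init__()
        self.experts = torch.nn.ModuleList(
            torch.nn.Sequential(torch.nn.Linear(d_model, d_ff),
                                torch.nn.ReLU(),
                                torch.nn.Linear(d_ff, d_model))
            for _ in range(num_experts))
        # Router that scores experts for the *next* block.
        self.next_router = torch.nn.Linear(d_model, num_experts)

    def forward(self, x, expert_idx):
        # expert_idx was decided by the previous block's pre-gate.
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            rows = expert_idx == e
            if rows.any():
                out[rows] = expert(x[rows])
        # Pre-gate: pick experts for the next block now, so its weights
        # could be prefetched while this block's output is still in use.
        next_idx = self.next_router(out).argmax(dim=-1)
        return out, next_idx

# Usage: chain two blocks; block 0's pre-gate drives block 1's expert choice.
blocks = [PreGatedBlock(32, 64, 4) for _ in range(2)]
x = torch.randn(128, 32)
idx = torch.randint(0, 4, (128,))      # initial routing decision
for block in blocks:
    x, idx = block(x, idx)
```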