EnergonAI: An Inference System for 10-100 Billion Parameter Transformer
Models
- URL: http://arxiv.org/abs/2209.02341v1
- Date: Tue, 6 Sep 2022 10:02:58 GMT
- Title: EnergonAI: An Inference System for 10-100 Billion Parameter Transformer
Models
- Authors: Jiangsu Du and Ziming Liu and Jiarui Fang and Shenggui Li and Yongbin
Li and Yutong Lu and Yang You
- Abstract summary: We propose EnergonAI to address the challenges of efficiently deploying 10-100 billion parameter transformer models.
EnergonAI adopts a hierarchy-controller system architecture to coordinate multiple devices and efficiently support different parallel patterns.
Compared with FasterTransformer, EnergonAI demonstrates superior latency and throughput.
- Score: 17.62360528651639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large transformer models display promising performance on a wide range of
natural language processing (NLP) tasks. Although the AI community has expanded the model
scale to the trillion-parameter level, the practical deployment of 10-100 billion parameter
models remains uncertain due to latency, throughput, and memory constraints.
In this paper, we propose EnergonAI to address the challenges of efficiently deploying
10-100 billion parameter transformer models on single- or multi-GPU systems. EnergonAI
adopts a hierarchy-controller system architecture to coordinate multiple devices and
efficiently support different parallel patterns. It delegates the execution of sub-models
to multiple workers in the single-controller style and applies tensor parallelism and
pipeline parallelism among the workers in a multi-controller style. On top of this
architecture, we propose three techniques: non-blocking pipeline parallelism, distributed
redundant computation elimination, and peer memory pooling. EnergonAI lets users write
complex parallel code as if it were serial. Compared with FasterTransformer, EnergonAI
demonstrates superior latency and throughput: in our experiments it achieves a 37% latency
reduction with tensor parallelism and a 10% scalability improvement with pipeline
parallelism, and it increases the model scale that can be served on a single GPU by using a
larger heterogeneous memory space at the cost of a limited performance reduction.
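A minimal single-process sketch of the non-blocking pipeline idea described in the abstract, assuming placeholder stage functions and Python queues in place of GPU workers and RPC; it is not EnergonAI's actual implementation. The controller enqueues every request before collecting any result, so successive requests keep all pipeline stages busy.

```python
# Sketch only: pipeline stages run as concurrent workers, and the controller
# submits requests without blocking on earlier ones.
import queue
import threading


def make_stage(in_q, out_q):
    """Worker loop for one pipeline stage; a stand-in for a sub-model shard."""
    def run():
        while True:
            item = in_q.get()
            if item is None:          # shutdown signal, forwarded downstream
                out_q.put(None)
                break
            req_id, x = item
            y = x + 1                 # placeholder for this stage's forward pass
            out_q.put((req_id, y))
    return threading.Thread(target=run, daemon=True)


def main():
    n_stages = 3
    queues = [queue.Queue() for _ in range(n_stages + 1)]
    workers = [make_stage(queues[i], queues[i + 1]) for i in range(n_stages)]
    for w in workers:
        w.start()

    # Non-blocking submission: enqueue all requests first, collect results later.
    for req_id in range(8):
        queues[0].put((req_id, req_id * 10))
    queues[0].put(None)

    while True:
        item = queues[-1].get()
        if item is None:
            break
        req_id, y = item
        print(f"request {req_id} -> {y}")


if __name__ == "__main__":
    main()
```

In the real system each stage would be a sub-model shard on its own GPU and the hand-off would be device-to-device communication rather than an in-process queue; the sketch only illustrates that the controller never waits for one request to finish before submitting the next.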
Related papers
- Kraken: Inherently Parallel Transformers For Efficient Multi-Device Inference [8.527031391688283]
Kraken is an evolution of the standard Transformer architecture for efficient inference on multi-device systems.
When trained on OpenWebText, Kraken models reach a similar perplexity as standard Transformers.
When tested on the SuperGLUE benchmark, Kraken speeds up Time To First Token by a mean of 35.6% across a range of model sizes.
arXiv Detail & Related papers (2024-08-14T20:24:03Z)
- Harnessing Manycore Processors with Distributed Memory for Accelerated Training of Sparse and Recurrent Models [43.1773057439246]
Current AI training infrastructure is dominated by single instruction multiple data (SIMD) and systolic array architectures.
We explore sparse and recurrent model training on a massively parallel multiple instruction multiple data architecture with distributed local memory.
arXiv Detail & Related papers (2023-11-07T23:18:35Z)
- RWKV: Reinventing RNNs for the Transformer Era [54.716108899349614]
We propose a novel model architecture that combines the efficient parallelizable training of transformers with the efficient inference of RNNs.
We scale our models as large as 14 billion parameters, by far the largest dense RNN ever trained, and find RWKV performs on par with similarly sized Transformers.
arXiv Detail & Related papers (2023-05-22T13:57:41Z)
- Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose a graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges.
Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z)
- Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z)
- Paraformer: Fast and Accurate Parallel Transformer for Non-autoregressive End-to-End Speech Recognition [62.83832841523525]
We propose a fast and accurate parallel transformer, termed Paraformer.
It accurately predicts the number of output tokens and extracts hidden variables.
It can attain comparable performance to the state-of-the-art AR transformer, with more than 10x speedup.
arXiv Detail & Related papers (2022-06-16T17:24:14Z)
- Dynamic Convolution for 3D Point Cloud Instance Segmentation [146.7971476424351]
We propose an approach to instance segmentation from 3D point clouds based on dynamic convolution.
We gather homogeneous points that have identical semantic categories and close votes for the geometric centroids.
The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance.
arXiv Detail & Related papers (2021-07-18T09:05:16Z)
- Easy and Efficient Transformer: Scalable Inference Solution For Large NLP Model [14.321889138798072]
This paper introduces a series of ultra-large-scale pre-training model optimization methods.
An inference engine, Easy and Efficient Transformer (EET), is proposed.
EET achieves a 1.5-15x state-of-the-art speedup varying with context length.
arXiv Detail & Related papers (2021-04-26T11:00:56Z)
- Efficient Large-Scale Language Model Training on GPU Clusters [19.00915720435389]
Large language models have led to state-of-the-art accuracies across a range of tasks.
GPU memory capacity is limited, making it impossible to fit large models on a single GPU.
The number of compute operations required to train these models can result in unrealistically long training times.
arXiv Detail & Related papers (2021-04-09T16:43:11Z)
- TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models [60.23234205219347]
TeraPipe is a high-performance token-level pipeline parallel algorithm for synchronous model-parallel training of Transformer-based language models.
We show that TeraPipe can speed up the training by 5.0x for the largest GPT-3 model with 175 billion parameters on an AWS cluster.
arXiv Detail & Related papers (2021-02-16T07:34:32Z)
- TurboTransformers: An Efficient GPU Serving System For Transformer Models [17.4637724940437]
The TurboTransformers system consists of a computing runtime and a serving framework.
An efficient parallel algorithm is proposed for GPU-based batch reduction operations.
A memory allocation algorithm is designed for variable-length input situations.
A serving framework equipped with a new batch scheduler achieves the optimal throughput on variable-length requests.
arXiv Detail & Related papers (2020-10-09T07:28:38Z)
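The TurboTransformers entry above mentions a batch scheduler for variable-length requests. As a generic, hypothetical illustration of that problem (not the paper's actual algorithm), the sketch below groups requests of similar length so that padding each batch to its longest member wastes little computation; the function name, threshold, and greedy strategy are all assumptions.

```python
# Length-aware batching sketch: sort requests by length and close a batch
# when adding the next request would waste too much padding.
from typing import List


def schedule_batches(lengths: List[int], max_batch: int, max_pad_ratio: float = 0.2) -> List[List[int]]:
    """Group request indices into batches of similar sequence length."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    batches: List[List[int]] = []
    current: List[int] = []
    for idx in order:
        if current:
            longest = lengths[idx]  # sorted order, so the newest request is the longest
            padded = longest * (len(current) + 1)
            useful = sum(lengths[i] for i in current) + longest
            too_much_padding = (padded - useful) / padded > max_pad_ratio
            if len(current) >= max_batch or too_much_padding:
                batches.append(current)
                current = []
        current.append(idx)
    if current:
        batches.append(current)
    return batches


if __name__ == "__main__":
    lengths = [12, 13, 14, 40, 42, 44, 128]
    print(schedule_batches(lengths, max_batch=4))  # [[0, 1, 2], [3, 4, 5], [6]]
```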