An Efficient Sparse Inference Software Accelerator for Transformer-based
Language Models on CPUs
- URL: http://arxiv.org/abs/2306.16601v1
- Date: Wed, 28 Jun 2023 23:55:51 GMT
- Title: An Efficient Sparse Inference Software Accelerator for Transformer-based
Language Models on CPUs
- Authors: Haihao Shen, Hengyu Meng, Bo Dong, Zhe Wang, Ofir Zafrir, Yi Ding, Yu
Luo, Hanwen Chang, Qun Gao, Ziheng Wang, Guy Boudoukh, and Moshe Wasserblat
- Abstract summary: Transformer-based language models have become the standard approach for natural language processing tasks.
Most existing neural network inference runtimes lack adequate support for structured sparsity.
We propose an efficient sparse deep learning inference software stack for Transformer-based language models.
- Score: 12.883586189626431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Transformer-based language models have become the standard
approach for natural language processing tasks. However, stringent throughput
and latency requirements in industrial applications are limiting their
adoption. To mitigate the gap, model compression techniques such as structured
pruning are being used to improve inference efficiency. However, most existing
neural network inference runtimes lack adequate support for structured
sparsity. In this paper, we propose an efficient sparse deep learning inference
software stack for Transformer-based language models where the weights are
pruned with constant block size. Our sparse software accelerator leverages
Intel Deep Learning Boost to maximize the performance of sparse-matrix dense-matrix
multiplication (commonly abbreviated as SpMM) on CPUs. Our SpMM kernel
outperforms the existing sparse libraries (oneMKL, TVM, and LIBXSMM) by an
order of magnitude on a wide range of GEMM shapes under 5 representative
sparsity ratios (70%, 75%, 80%, 85%, 90%). Moreover, our SpMM kernel shows up
to 5x speedup over the dense GEMM kernel of oneDNN, a well-optimized dense library
widely used in industry. We apply our sparse accelerator on widely-used
Transformer-based language models including BERT-Mini, DistilBERT, BERT-Base,
and BERT-Large. Our sparse inference software shows up to 1.5x speedup over
Neural Magic's DeepSparse under the same configurations on Xeon on Amazon Web
Services under proxy production latency constraints. We also compare our
solution with two framework-based inference solutions, ONNX Runtime and
PyTorch, and demonstrate up to 37x speedup over ONNX Runtime and 345x over
PyTorch on Xeon under the latency constraints. All the source code is publicly
available on Github: https://github.com/intel/intel-extension-for-transformers.
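To make the core idea concrete, the sketch below multiplies a block-pruned weight matrix against a dense activation in plain NumPy, visiting only the blocks that survive pruning. It is an illustration under assumed settings (4x1 blocks, 90% sparsity, float32), not the paper's kernel, which targets Intel Deep Learning Boost instructions; see the GitHub repository above for the actual implementation.

```python
# Minimal sketch of constant-block-size structured sparsity and block-wise SpMM.
# Assumptions for illustration: 4x1 blocks, 90% sparsity, float32 math in NumPy.
import numpy as np

BLOCK = 4                 # block size along the output (N) dimension
N, K, M = 64, 128, 32     # weight: N x K, activation: K x M
rng = np.random.default_rng(0)
weight = rng.standard_normal((N, K)).astype(np.float32)
activation = rng.standard_normal((K, M)).astype(np.float32)

# Structured pruning: zero every 4x1 block whose L2 norm falls below the 90th percentile.
blocks = weight.reshape(N // BLOCK, BLOCK, K)       # (num_row_blocks, BLOCK, K)
norms = np.linalg.norm(blocks, axis=1)              # one norm per 4x1 block
mask = (norms >= np.quantile(norms, 0.9))[:, None, :]
sparse_weight = (blocks * mask).reshape(N, K)

# Dense reference: multiply everything, zeros included.
dense_out = sparse_weight @ activation

# Block-wise SpMM: for each row block, touch only the columns whose block survived.
spmm_out = np.zeros((N, M), dtype=np.float32)
for b in range(N // BLOCK):
    nz = np.nonzero(mask[b, 0])[0]                  # surviving columns in this row block
    if nz.size:
        rows = slice(b * BLOCK, (b + 1) * BLOCK)
        spmm_out[rows] = sparse_weight[rows, nz] @ activation[nz]

assert np.allclose(dense_out, spmm_out, atol=1e-4)
```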
Related papers
- TPI-LLM: Serving 70B-scale LLMs Efficiently on Low-resource Edge Devices [36.714057078457195]
We present TPI-LLM, a compute- and memory-efficient tensor parallel inference system for 70B-scale models.
TPI-LLM keeps sensitive raw data local in the users' devices and introduces a sliding window memory scheduler.
We show that TPI-LLM achieves over 80% lower time-to-first-token and token latency than Accelerate.
arXiv Detail & Related papers (2024-10-01T09:18:56Z)
- Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment [56.44025052765861]
Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks.
We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs.
We show a total speedup on CPUs for sparse-quantized LLaMA models of up to 8.6x.
arXiv Detail & Related papers (2024-05-06T16:03:32Z)
- Scaling Sparse Fine-Tuning to Large Language Models [67.59697720719672]
Large Language Models (LLMs) are difficult to fully fine-tune due to their sheer number of parameters.
We propose SpIEL, a novel sparse fine-tuning method which maintains an array of parameter indices and the deltas of these parameters relative to their pretrained values.
We show that SpIEL is superior to popular parameter-efficient fine-tuning methods like LoRA in terms of performance and comparable in terms of run time.
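The index-plus-delta bookkeeping described above can be pictured with a short sketch; the names, shapes, and values here are hypothetical illustrations, not SpIEL's actual implementation.

```python
# Hedged sketch of sparse fine-tuning state: keep only the indices of updated
# parameters and their deltas relative to the pretrained values.
import numpy as np

pretrained = np.random.default_rng(0).standard_normal(1_000).astype(np.float32)

indices = np.array([3, 42, 512, 900])                       # hypothetical updated positions
deltas = np.array([0.10, -0.25, 0.05, 0.30], dtype=np.float32)

def materialize(weights, idx, delta):
    """Fine-tuned weights = pretrained weights + scattered sparse deltas."""
    out = weights.copy()
    out[idx] += delta
    return out

finetuned = materialize(pretrained, indices, deltas)
assert np.count_nonzero(finetuned - pretrained) == len(indices)
```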
arXiv Detail & Related papers (2024-01-29T18:43:49Z)
- FFSplit: Split Feed-Forward Network For Optimizing Accuracy-Efficiency Trade-off in Language Model Inference [57.119047493787185]
We show that our method can reduce model size by 43.1% and bring a $1.25\sim1.56\times$ wall-clock time speedup on different hardware with negligible accuracy drop.
arXiv Detail & Related papers (2024-01-08T17:29:16Z)
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity [12.663030430488922]
We propose Flash-LLM for enabling low-cost and highly-efficient large generative model inference on high-performance Tensor Cores.
At the SpMM kernel level, Flash-LLM significantly outperforms the state-of-the-art libraries Sputnik and SparTA by an average of 2.9x and 1.5x, respectively.
arXiv Detail & Related papers (2023-09-19T03:20:02Z)
- FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU [89.2451963569343]
FlexGen is a generation engine for running large language model (LLM) inference on a single commodity GPU.
When running OPT-175B on a single 16GB GPU, FlexGen achieves significantly higher throughput compared to state-of-the-art offloading systems.
On the HELM benchmark, FlexGen can benchmark a 30B model with a 16GB GPU on 7 representative sub-scenarios in 21 hours.
arXiv Detail & Related papers (2023-03-13T05:19:28Z)
- Fast DistilBERT on CPUs [13.29188219884869]
Transformer-based language models have become the standard approach to solving natural language processing tasks.
Industry adoption usually requires the maximum throughput to comply with certain latency constraints.
We propose a new pipeline for creating and running Fast Transformer models on CPUs, utilizing hardware-aware pruning, knowledge distillation, quantization, and our own Transformer inference runtime engine with optimized kernels for sparse and quantized operators.
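As one illustrative piece of such a pipeline, the sketch below shows generic symmetric per-tensor int8 weight quantization; it is an assumption about the general technique, not the paper's exact quantization scheme or runtime API.

```python
# Minimal sketch of symmetric per-tensor int8 weight quantization, the kind of
# transformation a quantized CPU runtime consumes. Real pipelines typically use
# per-channel scales, calibration, and fused int8 kernels.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 with a single symmetric scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((768, 768)).astype(np.float32)
q, scale = quantize_int8(w)

# Round-trip error is bounded by half a quantization step.
assert np.abs(dequantize(q, scale) - w).max() <= scale / 2 + 1e-6
```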
arXiv Detail & Related papers (2022-10-27T07:22:50Z)
- NumS: Scalable Array Programming for the Cloud [82.827921577004]
We present NumS, an array programming library which optimizes NumPy-like expressions on task-based distributed systems.
This is achieved through a novel scheduler called Load Simulated Hierarchical Scheduling (LSHS).
We show that LSHS enhances performance on Ray by decreasing network load by a factor of 2x, requiring 4x less memory, and reducing execution time by 10x on the logistic regression problem.
arXiv Detail & Related papers (2022-06-28T20:13:40Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation scheme can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- SparseDNN: Fast Sparse Deep Learning Inference on CPUs [1.6244541005112747]
We present SparseDNN, a sparse deep learning inference engine targeting CPUs.
We show that our sparse code generator can achieve significant speedups over state-of-the-art sparse and dense libraries.
arXiv Detail & Related papers (2021-01-20T03:27:35Z)