PopSparse: Accelerated block sparse matrix multiplication on IPU
- URL: http://arxiv.org/abs/2303.16999v2
- Date: Wed, 5 Apr 2023 13:43:15 GMT
- Title: PopSparse: Accelerated block sparse matrix multiplication on IPU
- Authors: Zhiyi Li, Douglas Orr, Valeriu Ohan, Godfrey Da costa, Tom Murray,
Adam Sanders, Deniz Beker, Dominic Masters
- Abstract summary: We introduce PopSparse, a library that enables fast sparse operations on Graphcore IPUs.
We target two different types of sparsity: static, where the sparsity pattern is fixed at compile-time; and dynamic, where it can change each time the model is run.
Results indicate that the PopSparse implementations are faster than dense matrix multiplications on IPU at a range of sparsity levels.
- Score: 0.5661403709207713
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reducing the computational cost of running large scale neural networks using
sparsity has attracted great attention in the deep learning community. While
much success has been achieved in reducing FLOP and parameter counts while
maintaining acceptable task performance, achieving actual speed improvements
has typically been much more difficult, particularly on general purpose
accelerators (GPAs) such as NVIDIA GPUs using low precision number formats. In
this work we introduce PopSparse, a library that enables fast sparse operations
on Graphcore IPUs by leveraging both the unique hardware characteristics of
IPUs as well as any block structure defined in the data. We target two
different types of sparsity: static, where the sparsity pattern is fixed at
compile-time; and dynamic, where it can change each time the model is run. We
present benchmark results for matrix multiplication for both of these modes on
IPU with a range of block sizes, matrix sizes and densities. Results indicate
that the PopSparse implementations are faster than dense matrix multiplications
on IPU at a range of sparsity levels with large matrix size and block size.
Furthermore, static sparsity in general outperforms dynamic sparsity. While
previous work on GPAs has shown speedups only for very high sparsity (typically
99% and above), the present work demonstrates that our static sparse
implementation outperforms equivalent dense calculations in FP16 at lower
sparsity (around 90%). IPU code is available to view and run at
ipu.dev/sparsity-benchmarks; GPU code will be made available shortly.
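To make the benchmarked operation concrete, below is a minimal NumPy sketch of multiplying a block-sparse matrix, stored as its nonzero blocks plus their block coordinates, by a dense operand. This is purely illustrative and does not reflect the PopSparse API or IPU kernels; in the static mode the block coordinates are fixed ahead of time, while in the dynamic mode they can change each time the model is run.

```python
# Illustrative block-sparse (BSR-style) x dense multiply in NumPy.
# Not the PopSparse API; names and storage layout are assumptions for this sketch.
import numpy as np

def block_sparse_matmul(blocks, block_coords, dense, block_size, num_rows):
    """Compute C = A @ dense, where A (num_rows x K) is given only by its
    nonzero block_size x block_size blocks and their (row, col) block indices."""
    out = np.zeros((num_rows, dense.shape[1]), dtype=dense.dtype)
    for blk, (bi, bj) in zip(blocks, block_coords):
        r, c = bi * block_size, bj * block_size
        # Each stored block multiplies the matching row-slab of the dense operand.
        out[r:r + block_size] += blk @ dense[c:c + block_size]
    return out

# Example: a 4x4 sparse operand with 2x2 blocks on the block diagonal (50% density).
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((2, 2)), rng.standard_normal((2, 2))]
coords = [(0, 0), (1, 1)]
dense = rng.standard_normal((4, 3))
c = block_sparse_matmul(blocks, coords, dense, block_size=2, num_rows=4)
```

Larger block sizes amortize indexing overhead over more useful arithmetic per block, which is consistent with the abstract's observation that speedups over dense multiplication appear at large matrix and block sizes.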
Related papers
- Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment [56.44025052765861]
Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks.
We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs.
We show a total speedup on CPUs for sparse-quantized LLaMA models of up to 8.6x.
arXiv Detail & Related papers (2024-05-06T16:03:32Z)
- Masked Matrix Multiplication for Emergent Sparsity [1.4786952412297807]
Transformer models exhibit emergent sparsity in which computations perform selective sparse access to dense data.
We build a vectorized and parallel matrix-multiplication system A X B = C that eliminates unnecessary computations.
arXiv Detail & Related papers (2024-02-21T20:36:08Z)
- PIT: Optimization of Dynamic Sparse Deep Learning Models via Permutation Invariant Transformation [15.860204740425791]
We propose Permutation Invariant Transformation (PIT) for dynamic sparsity computation.
PIT transforms micro-tiles into a GPU-efficient dense tile without changing the results (a toy gather-and-scatter sketch of this idea appears after this list).
It can accelerate dynamic sparsity computation by up to 5.9x (average 2.43x) over state-of-the-art compilers.
arXiv Detail & Related papers (2023-01-26T04:50:14Z)
- RSC: Accelerating Graph Neural Networks Training via Randomized Sparse Computations [56.59168541623729]
Training graph neural networks (GNNs) is time-consuming because sparse graph-based operations are hard to accelerate in hardware.
We explore trading off the computational precision to reduce the time complexity via sampling-based approximation.
We propose Randomized Sparse Computation, which for the first time demonstrates the potential of training GNNs with approximated operations.
arXiv Detail & Related papers (2022-10-19T17:25:33Z)
- Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design [66.39546326221176]
Attention-based neural networks have become pervasive in many AI tasks.
The use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources.
This paper proposes a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs.
arXiv Detail & Related papers (2022-09-20T09:28:26Z)
- VersaGNN: a Versatile accelerator for Graph neural networks [81.1667080640009]
We propose VersaGNN, an ultra-efficient, systolic-array-based versatile hardware accelerator.
VersaGNN achieves on average a 3712× speedup with 1301.25× energy reduction on CPU, and a 35.4× speedup with 17.66× energy reduction on GPU.
arXiv Detail & Related papers (2021-05-04T04:10:48Z)
- Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [75.69506249886622]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments.
In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network (a minimal 2:4 masking sketch appears after this list).
arXiv Detail & Related papers (2021-02-08T05:55:47Z)
- Sparse Systolic Tensor Array for Efficient CNN Hardware Acceleration [14.958793135751149]
Convolutional neural network (CNN) inference on mobile devices demands efficient hardware acceleration of low-precision (INT8) general matrix multiplication (GEMM).
Exploiting data sparsity is a common approach to further accelerate GEMM for CNN inference, and in particular, structural sparsity has the advantages of predictable load balancing and very low index overhead.
We address a key architectural challenge with structural sparsity: how to provide support for a range of sparsity levels while maintaining high utilization of the hardware.
arXiv Detail & Related papers (2020-09-04T20:17:42Z)
- Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity [12.643043455369297]
We propose an algorithm-software co-designed pruning method that achieves latency speedups on existing dense architectures.
We implement and evaluate the sparsity pattern on GPU tensor core, achieving a 1.95x speedup over the dense model.
arXiv Detail & Related papers (2020-08-29T16:27:41Z)
- Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far could hardly be used in large scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
arXiv Detail & Related papers (2020-06-18T08:16:25Z)
- Heterogeneous CPU+GPU Stochastic Gradient Descent Algorithms [1.3249453757295084]
We study training algorithms for deep learning on heterogeneous CPU+GPU architectures.
Our two-fold objective of maximizing convergence rate and resource utilization simultaneously makes the problem challenging.
We show that the implementation of these algorithms achieves both faster convergence and higher resource utilization on several real datasets.
arXiv Detail & Related papers (2020-04-19T05:21:20Z)
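As noted in the PIT entry above, here is a toy NumPy illustration of the permutation-invariance idea behind that approach, reduced to whole-row granularity: the nonzero rows of a dynamically sparse operand are gathered into a compact dense tile, multiplied with an ordinary dense kernel, and scattered back, leaving the product unchanged. This is a sketch of the principle only, not the PIT micro-tile format or GPU implementation.

```python
# Toy gather -> dense multiply -> scatter, exploiting permutation invariance.
# Row granularity stands in for PIT's micro-tiles; this is not the PIT system.
import numpy as np

rng = np.random.default_rng(0)
a = np.zeros((8, 4))
nonzero_rows = np.array([1, 4, 6])            # sparsity pattern known only at run time
a[nonzero_rows] = rng.standard_normal((3, 4))
b = rng.standard_normal((4, 5))

packed = a[nonzero_rows]                      # gather sparse rows into a dense tile
partial = packed @ b                          # dense multiply on the packed tile
c = np.zeros((8, 5))
c[nonzero_rows] = partial                     # scatter results back to their rows

assert np.allclose(c, a @ b)                  # identical to the full dense product
```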
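Similarly, for the "Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch" entry, the following is a minimal sketch of a 2:4 structured sparsity mask: within every contiguous group of four weights, only the two largest-magnitude entries are kept. The helper name and the grouping into flattened groups of four are illustrative assumptions, not the paper's training method.

```python
# Minimal 2:4 (N:M) structured sparsity mask; illustrative only.
import numpy as np

def two_four_mask(weights):
    """Return a 0/1 mask keeping the top-2 magnitudes in every group of 4 weights."""
    groups = weights.reshape(-1, 4)
    top2 = np.argsort(np.abs(groups), axis=1)[:, -2:]   # indices of the 2 largest |w|
    mask = np.zeros_like(groups)
    np.put_along_axis(mask, top2, 1.0, axis=1)
    return mask.reshape(weights.shape)

w = np.random.default_rng(0).standard_normal((4, 8))     # weight count divisible by 4
sparse_w = w * two_four_mask(w)                          # exactly 50% of entries remain
```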