TorchSparse++: Efficient Training and Inference Framework for Sparse
Convolution on GPUs
- URL: http://arxiv.org/abs/2311.12862v1
- Date: Wed, 25 Oct 2023 21:02:38 GMT
- Title: TorchSparse++: Efficient Training and Inference Framework for Sparse
Convolution on GPUs
- Authors: Haotian Tang, Shang Yang, Zhijian Liu, Ke Hong, Zhongming Yu, Xiuyu
Li, Guohao Dai, Yu Wang, Song Han
- Abstract summary: Sparse convolution plays a pivotal role in emerging workloads, including point cloud processing in AR/VR, autonomous driving, and graph understanding in recommendation systems.
Existing GPU libraries offer two dataflow types for sparse convolution.
We introduce TorchSparse++, a new GPU library that achieves the best of both worlds.
- Score: 20.4238781638402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sparse convolution plays a pivotal role in emerging workloads, including
point cloud processing in AR/VR, autonomous driving, and graph understanding in
recommendation systems. Since the computation pattern is sparse and irregular,
specialized high-performance kernels are required. Existing GPU libraries offer
two dataflow types for sparse convolution. The gather-GEMM-scatter dataflow is
easy to implement but not optimal in performance, while the dataflows with
overlapped computation and memory access (e.g., implicit GEMM) are highly
performant but have very high engineering costs. In this paper, we introduce
TorchSparse++, a new GPU library that achieves the best of both worlds. We
create a highly efficient Sparse Kernel Generator that generates performant
sparse convolution kernels at less than one-tenth of the engineering cost of
the current state-of-the-art system. On top of this, we design the Sparse
Autotuner, which extends the design space of existing sparse convolution
libraries and searches for the best dataflow configurations for training and
inference workloads. Consequently, TorchSparse++ achieves 2.9x, 3.3x, 2.2x and
1.7x measured end-to-end speedup on an NVIDIA A100 GPU over state-of-the-art
MinkowskiEngine, SpConv 1.2, TorchSparse and SpConv v2 in inference; and is
1.2-1.3x faster than SpConv v2 in mixed precision training across seven
representative autonomous driving benchmarks. It also seamlessly supports graph
convolutions, achieving 2.6-7.6x faster inference speed compared with
state-of-the-art graph deep learning libraries.
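To make the dataflow contrast concrete, here is a minimal NumPy sketch of the gather-GEMM-scatter dataflow described above; the names (`gather_gemm_scatter`, `kernel_maps`) are illustrative and not part of the TorchSparse++ API. Each kernel offset gathers its paired input features, runs one dense GEMM, and scatter-accumulates the results into the output sites:

```python
# Gather-GEMM-scatter dataflow for sparse convolution (illustrative sketch,
# not the TorchSparse++ API). Each kernel offset has a precomputed "kernel
# map" pairing the input sites it reads with the output sites it updates.
import numpy as np

def gather_gemm_scatter(features, weights, kernel_maps, n_out):
    """features: (N_in, C_in); weights: offset -> (C_in, C_out) matrix;
    kernel_maps: offset -> (in_idx, out_idx) integer index arrays."""
    c_out = next(iter(weights.values())).shape[1]
    out = np.zeros((n_out, c_out), dtype=features.dtype)
    for offset, (in_idx, out_idx) in kernel_maps.items():
        gathered = features[in_idx]           # gather: irregular read
        partial = gathered @ weights[offset]  # GEMM: dense, regular compute
        np.add.at(out, out_idx, partial)      # scatter: irregular accumulate
    return out
```

The implicit-GEMM family instead overlaps the gather and scatter with the GEMM's memory pipeline, which is what makes it faster and so much costlier to engineer by hand.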
Related papers
- High Performance Computing Applied to Logistic Regression: A CPU and GPU
Implementation Comparison [0.0]
We present a versatile GPU-based parallel version of Logistic Regression (LR).
Our implementation is a direct translation of the parallel Gradient Descent Logistic Regression algorithm proposed by X. Zou et al.
Our method is particularly advantageous for real-time prediction applications like image recognition, spam detection, and fraud detection.
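As a rough illustration of the underlying computation (a sketch, not the paper's GPU implementation), one step of batch gradient descent for logistic regression reduces to matrix-vector products that parallelize naturally:

```python
# Batch gradient descent for logistic regression (NumPy sketch). On a GPU,
# the matrix-vector products below map directly onto parallel kernels.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=100):
    """X: (n_samples, n_features); y: (n_samples,) labels in {0, 1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # one parallel-friendly GEMV
        w -= lr * grad
    return w
```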
arXiv Detail & Related papers (2023-08-19T14:49:37Z) - INR-Arch: A Dataflow Architecture and Compiler for Arbitrary-Order
Gradient Computations in Implicit Neural Representation Processing [66.00729477511219]
Given a function represented as a computation graph, traditional architectures face challenges in efficiently computing its nth-order gradient.
We introduce INR-Arch, a framework that transforms the computation graph of an nth-order gradient into a hardware-optimized dataflow architecture.
We present results that demonstrate 1.8-4.8x and 1.5-3.6x speedup compared to CPU and GPU baselines respectively.
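For context, this is what an nth-order gradient means operationally, sketched with PyTorch autograd; INR-Arch compiles this repeated-differentiation pattern into a hardware dataflow rather than executing it as sequential autograd passes like this:

```python
# nth-order gradient by repeated automatic differentiation (PyTorch sketch).
# INR-Arch transforms this computation graph into a hardware-optimized
# dataflow design instead of running n sequential autograd passes.
import torch

def nth_grad(f, x, n):
    """d^n f / dx^n at a scalar tensor x with requires_grad=True."""
    g = f(x)
    for _ in range(n):
        (g,) = torch.autograd.grad(g, x, create_graph=True)
    return g

x = torch.tensor(0.5, requires_grad=True)
print(nth_grad(torch.sin, x, 2))  # second derivative: -sin(0.5)
```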
arXiv Detail & Related papers (2023-08-11T04:24:39Z) - PARTIME: Scalable and Parallel Processing Over Time with Deep Neural
Networks [68.96484488899901]
We present PARTIME, a library designed to speed up neural networks whenever data is continuously streamed over time.
PARTIME starts processing each data sample as soon as it becomes available from the stream.
Experiments empirically compare PARTIME with classic non-parallel neural computations in online learning.
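A conceptual sketch of the idea, assuming the network's stages are pipelined over the stream (the names below are hypothetical, not PARTIME's API): at every stream tick each stage holds a different in-flight sample, so a new sample is admitted immediately instead of waiting for the previous one to traverse the whole network:

```python
# Layer-pipelined stream processing (conceptual sketch, not the PARTIME API).
# At each tick, stage i consumes what stage i-1 produced on the previous
# tick, so all stages can run concurrently on different samples.
def pipeline_stream(stages, stream):
    slots = [None] * len(stages)  # sample currently held by each stage
    for sample in stream:
        for i in reversed(range(1, len(stages))):
            prev = slots[i - 1]
            slots[i] = stages[i](prev) if prev is not None else None
        slots[0] = stages[0](sample)
        if slots[-1] is not None:
            yield slots[-1]  # output lags input by len(stages) - 1 ticks

# Toy usage: two "layers" applied to a stream of numbers.
outputs = pipeline_stream([lambda v: 2 * v, lambda v: v + 1], iter(range(5)))
print(list(outputs))  # [1, 3, 5, 7]
```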
arXiv Detail & Related papers (2022-10-17T14:49:14Z) - TorchSparse: Efficient Point Cloud Inference Engine [24.541195361633523]
We introduce TorchSparse, a high-performance point cloud inference engine.
TorchSparse directly optimizes the two bottlenecks of sparse convolution: irregular computation and data movement.
It achieves 1.6x and 1.5x measured end-to-end speedup over the state-of-the-art MinkowskiEngine and SpConv, respectively.
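The kernel maps consumed by the gather-GEMM-scatter sketch earlier must first be built from the active coordinates; a toy 1D version of that construction (illustrative only, TorchSparse builds these maps with tuned GPU passes) looks like:

```python
# Kernel-map construction for a 1D submanifold sparse convolution
# (illustrative sketch; TorchSparse does this with optimized GPU kernels).
def build_kernel_map(coords, offsets):
    """coords: active integer sites; offsets: kernel offsets.
    Returns offset -> list of (input_index, output_index) pairs."""
    index_of = {c: i for i, c in enumerate(coords)}
    kmap = {k: [] for k in offsets}
    for out_i, c in enumerate(coords):  # submanifold: outputs = inputs
        for k in offsets:
            in_i = index_of.get(c + k)
            if in_i is not None:
                kmap[k].append((in_i, out_i))
    return kmap

print(build_kernel_map([0, 1, 4], offsets=(-1, 0, 1)))
# {-1: [(0, 1)], 0: [(0, 0), (1, 1), (2, 2)], 1: [(1, 0)]}
```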
arXiv Detail & Related papers (2022-04-21T17:58:30Z) - PLSSVM: A (multi-)GPGPU-accelerated Least Squares Support Vector Machine [68.8204255655161]
Support Vector Machines (SVMs) are widely used in machine learning.
However, even modern and optimized implementations do not scale well for large non-trivial dense data sets on cutting-edge hardware.
PLSSVM can be used as a drop-in replacement for LIBSVM.
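What makes least-squares SVMs attractive for GPU acceleration is that training reduces to a single symmetric linear solve. A minimal NumPy sketch of the regression-form LS-SVM with an assumed RBF kernel (this illustrates the linear-system structure, not PLSSVM's actual solver):

```python
# LS-SVM (regression form) training is one linear system:
#   [ 0   1^T         ] [b]     [0]
#   [ 1   K + I/gamma ] [alpha] [y]
# NumPy sketch; PLSSVM solves systems like this on (multi-)GPUs.
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return b, alpha  # predict: rbf_kernel(X_new, X) @ alpha + b
```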
arXiv Detail & Related papers (2022-02-25T13:24:23Z) - Accelerating Training and Inference of Graph Neural Networks with Fast
Sampling and Pipelining [58.10436813430554]
Mini-batch training of graph neural networks (GNNs) requires a lot of computation and data movement.
We argue in favor of performing mini-batch training with neighborhood sampling in a distributed multi-GPU environment.
We present a sequence of improvements to mitigate these bottlenecks, including a performance-engineered neighborhood sampler.
We also conduct an empirical analysis that supports the use of sampling for inference, showing that test accuracies are not materially compromised.
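A bare-bones picture of neighborhood sampling (illustrative, not the paper's performance-engineered sampler): per GNN layer, each frontier node keeps at most `fanout` neighbors, which caps the computation and data movement of the mini-batch:

```python
# Neighborhood sampling for one mini-batch (illustrative sketch).
import random

def sample_blocks(adj, seeds, fanouts):
    """adj: node -> list of neighbors; seeds: target nodes of the batch;
    fanouts: max neighbors kept per node, one entry per GNN layer."""
    blocks, frontier = [], list(seeds)
    for fanout in fanouts:
        edges = []
        for v in frontier:
            nbrs = adj.get(v, [])
            picked = nbrs if len(nbrs) <= fanout else random.sample(nbrs, fanout)
            edges += [(u, v) for u in picked]
        blocks.append(edges)
        frontier = sorted({u for u, _ in edges})
    return blocks  # blocks[0] feeds the final GNN layer; later blocks go deeper
```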
arXiv Detail & Related papers (2021-10-16T02:41:35Z) - Efficient and Generic 1D Dilated Convolution Layer for Deep Learning [52.899995651639436]
We introduce our efficient implementation of a generic 1D convolution layer covering a wide range of parameters.
It is optimized for x86 CPU architectures, in particular, for architectures containing Intel AVX-512 and AVX-512 BFloat16 instructions.
We demonstrate the performance of our optimized 1D convolution layer by utilizing it in the end-to-end neural network training with real genomics datasets.
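For reference, this is the arithmetic a dilated 1D convolution performs (plain NumPy sketch; the paper's layer is a hand-vectorized AVX-512 implementation of the same operation):

```python
# Dilated 1D convolution, 'valid' mode (NumPy sketch). Tap t of the kernel
# reads the input at stride `dilation`, widening the receptive field.
import numpy as np

def dilated_conv1d(x, w, dilation=1):
    """x: (length,); w: (taps,); returns the valid (unpadded) output."""
    span = (len(w) - 1) * dilation + 1  # receptive field per output
    out_len = len(x) - span + 1
    out = np.zeros(out_len)
    for t in range(len(w)):
        out += w[t] * x[t * dilation : t * dilation + out_len]
    return out

x = np.arange(8, dtype=float)
print(dilated_conv1d(x, np.array([1.0, -1.0]), dilation=2))  # x[i] - x[i+2]
```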
arXiv Detail & Related papers (2021-04-16T09:54:30Z) - Sparse GPU Kernels for Deep Learning [24.94153856081836]
Deep learning applications have relatively moderate levels of sparsity that are not sufficient for existing sparse kernels to outperform their dense counterparts.
We develop high-performance GPU kernels for two sparse matrix operations widely applicable in neural networks.
Using our kernels, we demonstrate sparse Transformer and MobileNet models that achieve 1.2-2.1x speedups and up to 12.8x memory savings without sacrificing accuracy.
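One of the two operations in question is sparse matrix times dense matrix multiplication (SpMM); a reference CSR implementation (sketch only; the paper's kernels tile and vectorize this loop on the GPU):

```python
# SpMM with the sparse operand in CSR form (reference NumPy sketch).
import numpy as np

def spmm_csr(indptr, indices, data, B):
    """Compute A @ B where A is CSR (indptr, indices, data) and B is dense."""
    out = np.zeros((len(indptr) - 1, B.shape[1]))
    for row in range(len(indptr) - 1):
        for p in range(indptr[row], indptr[row + 1]):
            out[row] += data[p] * B[indices[p]]  # accumulate one nonzero's row
    return out
```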
arXiv Detail & Related papers (2020-06-18T23:59:11Z) - Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but so far they could hardly be used in large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
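One such algorithmic idea is the Nystrom approximation: restrict the solution to m << n inducing centers so kernel ridge regression needs only an m-by-m solve. A NumPy sketch of that idea with an assumed RBF kernel (not the paper's actual preconditioned solver):

```python
# Nystrom-approximate kernel ridge regression (NumPy sketch). Restricting
# the solution to m << n centers shrinks the training solve from n x n to
# m x m; the paper combines ideas like this with preconditioning and GPU
# linear algebra.
import numpy as np

def rbf(X, Z, sigma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def nystrom_krr(X, y, m=100, lam=1e-3, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    Knm = rbf(X, centers, sigma)        # (n, m) cross-kernel
    Kmm = rbf(centers, centers, sigma)  # (m, m) center kernel
    alpha = np.linalg.solve(Knm.T @ Knm + lam * Kmm, Knm.T @ y)
    return centers, alpha  # predict: rbf(X_new, centers) @ alpha
```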
arXiv Detail & Related papers (2020-06-18T08:16:25Z) - Heterogeneous CPU+GPU Stochastic Gradient Descent Algorithms [1.3249453757295084]
We study training algorithms for deep learning on heterogeneous CPU+GPU architectures.
Our two-fold objective -- maximize convergence rate and resource utilization simultaneously -- makes the problem challenging.
We show that the implementation of these algorithms achieves both faster convergence and higher resource utilization on several real datasets.
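To illustrate the resource-utilization side of the problem (a toy sketch, not the paper's algorithms), one heterogeneous SGD step can split each mini-batch between CPU and GPU model replicas in proportion to their throughput and then merge the gradients:

```python
# Toy heterogeneous CPU+GPU SGD step (sketch, not the paper's algorithms):
# split the mini-batch by device throughput, compute partial gradients on
# each replica, then apply the merged gradient to both replicas.
import torch

def hetero_sgd_step(cpu_model, gpu_model, X, y, loss_fn, lr, gpu_share=0.8):
    k = int(len(X) * gpu_share)  # samples routed to the faster device
    parts = [(gpu_model, X[:k].cuda(), y[:k].cuda()),
             (cpu_model, X[k:], y[k:])]
    sums = None
    for model, xb, yb in parts:
        model.zero_grad()
        loss_fn(model(xb), yb).backward()  # mean loss over the sub-batch
        part = [p.grad.detach().cpu() * len(xb) for p in model.parameters()]
        sums = part if sums is None else [s + g for s, g in zip(sums, part)]
    with torch.no_grad():  # apply the same merged gradient to both replicas
        for model in (cpu_model, gpu_model):
            for p, g in zip(model.parameters(), sums):
                p -= lr * (g / len(X)).to(p.device)
```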
arXiv Detail & Related papers (2020-04-19T05:21:20Z)