PolyDL: Polyhedral Optimizations for Creation of High Performance DL
primitives
- URL: http://arxiv.org/abs/2006.02230v2
- Date: Tue, 17 Nov 2020 15:43:42 GMT
- Title: PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives
- Authors: Sanket Tavarageri, Alexander Heinecke, Sasikanth Avancha, Gagandeep
Goyal, Ramakrishna Upadrasta, Bharat Kaul
- Abstract summary: We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid compiler plus a minimal library-use approach results in state-of-the-art performance.
- Score: 55.79741270235602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have revolutionized many aspects of our lives.
The use of DNNs is becoming ubiquitous, including in software for image
recognition, speech recognition, speech synthesis, and language translation, to
name a few. The training of DNN architectures, however, is computationally
expensive. Once a model is created, its use in the intended application - the
inference task - is computationally heavy too, and inference needs to be fast
for real-time use. Today, the norm for obtaining high performance is code for
Deep Learning (DL) primitives optimized for specific architectures by expert
programmers and exposed via libraries. However, given the constant emergence of
new DNN architectures, creating hand-optimized code is expensive, slow, and not
scalable.
To address this performance-productivity challenge, in this paper we present
compiler algorithms that automatically generate high-performance
implementations of DL primitives closely matching the performance of
hand-optimized libraries. We develop novel data reuse analysis algorithms using
the polyhedral model to derive efficient execution schedules automatically. In
addition, because most DL primitives use some variant of matrix multiplication
at their core, we develop a flexible framework in which library implementations
of matrix multiplication can be plugged in in place of a subset of the loops.
We show that such a hybrid compiler-plus-minimal-library approach achieves
state-of-the-art performance. We also develop compiler algorithms that perform
operator fusion to reduce data movement through the memory hierarchy of the
computer system.
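The hybrid approach the abstract describes can be sketched in a few lines. The following is a minimal illustration only, not the paper's actual system (which generates optimized code and invokes vendor microkernels): the outer tile loops stand in for the schedule a polyhedral data reuse analysis would derive, and NumPy's `@` stands in for the library GEMM microkernel plugged in beneath them. All function and parameter names here are hypothetical.

```python
import numpy as np

def tiled_matmul(A, B, tile_m=32, tile_n=32, tile_k=32):
    """Tiled GEMM sketch: outer loops are the kind a polyhedral
    schedule would choose for data reuse; the innermost tile product
    is delegated to a "microkernel" (np.matmul as a stand-in)."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must agree"
    C = np.zeros((M, N), dtype=A.dtype)
    # A reuse-driven loop order keeps hot tiles of A, B, and C in cache;
    # NumPy slicing handles ragged edge tiles automatically.
    for j in range(0, N, tile_n):
        for k in range(0, K, tile_k):
            for i in range(0, M, tile_m):
                # Library microkernel call in lieu of the three inner loops.
                C[i:i+tile_m, j:j+tile_n] += (
                    A[i:i+tile_m, k:k+tile_k] @ B[k:k+tile_k, j:j+tile_n]
                )
    return C
```

In the paper's setting, the tile sizes and the loop order around the microkernel are exactly what the compiler's data reuse analysis selects, rather than being fixed by hand as here.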
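The operator fusion mentioned at the end of the abstract can likewise be sketched. As an assumed, minimal NumPy illustration (names hypothetical, not the paper's implementation): a GEMM followed by bias-add and ReLU is computed one output tile at a time, so the intermediate result is consumed while still hot in cache instead of round-tripping through memory as in the unfused version.

```python
import numpy as np

def gemm_bias_relu_unfused(A, B, bias):
    # Unfused: the full intermediate T is materialized in memory twice.
    T = A @ B
    T = T + bias
    return np.maximum(T, 0)

def gemm_bias_relu_fused(A, B, bias, tile_n=32):
    # Fused sketch: bias-add and ReLU are applied per output tile,
    # reducing data movement through the memory hierarchy.
    M, N = A.shape[0], B.shape[1]
    C = np.empty((M, N), dtype=A.dtype)
    for j in range(0, N, tile_n):
        t = A @ B[:, j:j+tile_n]          # small tile, stays in cache
        C[:, j:j+tile_n] = np.maximum(t + bias[j:j+tile_n], 0)
    return C
```

Both versions compute the same result; the fused one trades one large intermediate for many cache-sized ones, which is the effect the paper's fusion algorithms aim for.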
Related papers
- Spyx: A Library for Just-In-Time Compiled Optimization of Spiking Neural Networks [0.08965418284317034]
Spiking Neural Networks (SNNs) offer the potential to enhance energy efficiency through a reduced, low-power hardware footprint.
This paper introduces Spyx, a new and lightweight SNN simulation and optimization library designed in JAX.
arXiv Detail & Related papers (2024-02-29T09:46:44Z)
- Use Your INSTINCT: INSTruction optimization for LLMs usIng Neural bandits Coupled with Transformers [66.823588073584]
Large language models (LLMs) have shown remarkable instruction-following capabilities and achieved impressive performances in various applications.
Recent work has used the query-efficient Bayesian optimization (BO) algorithm to automatically optimize the instructions given to black-box LLMs.
We propose a neural bandit algorithm which replaces the Gaussian process (GP) in BO with a neural network (NN) surrogate to optimize instructions for black-box LLMs.
arXiv Detail & Related papers (2023-10-02T02:01:16Z)
- Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose kernel development into two steps: 1) expressing the computational core using Tensor Processing Primitives (TPPs) and 2) expressing the logical loops around TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
arXiv Detail & Related papers (2023-04-25T05:04:44Z)
- oneDNN Graph Compiler: A Hybrid Approach for High-Performance Deep Learning Compilation [8.64220475114214]
oneDNN Graph Compiler employs a hybrid approach of using techniques from both compiler optimization and expert-tuned kernels for high performance code generation.
Experimental results demonstrate significant performance gains over existing tensor compilers and primitives libraries for performance-critical computation graphs.
arXiv Detail & Related papers (2023-01-03T19:52:17Z)
- Boosting Neural Networks to Decompile Optimized Binaries [13.255618541522436]
Decompilation aims to transform a low-level program language (LPL) into its functionally-equivalent high-level program language (HPL).
We propose a novel learning-based approach named NeurDP, that targets compiler-optimized binaries.
arXiv Detail & Related papers (2023-01-03T06:45:54Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Benchmark Assessment for DeepSpeed Optimization Library [1.7839986996686321]
Deep Learning (DL) models are widely used in machine learning due to their performance and ability to deal with large datasets.
The size of such datasets and the complexity of DL models make these models consume large amounts of resources and time to train.
Many recent libraries and applications are introduced to deal with DL complexity and efficiency issues.
arXiv Detail & Related papers (2022-02-12T04:52:28Z)
- PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives [55.79741270235602]
We develop a hybrid solution to the development of deep learning kernels.
We use the advanced polyhedral technology to automatically tune the outer loops for performance.
arXiv Detail & Related papers (2020-02-06T08:02:34Z)
- PatDNN: Achieving Real-Time DNN Execution on Mobile Devices with Pattern-based Weight Pruning [57.20262984116752]
We introduce a new dimension, fine-grained pruning patterns inside the coarse-grained structures, revealing a previously unknown point in design space.
With the higher accuracy enabled by fine-grained pruning patterns, the unique insight is to use the compiler to regain and guarantee high hardware efficiency.
arXiv Detail & Related papers (2020-01-01T04:52:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.