UNIT: Unifying Tensorized Instruction Compilation
- URL: http://arxiv.org/abs/2101.08458v1
- Date: Thu, 21 Jan 2021 06:22:58 GMT
- Title: UNIT: Unifying Tensorized Instruction Compilation
- Authors: Jian Weng, Animesh Jain, Jie Wang, Leyuan Wang, Yida Wang, and Tony
Nowatzki
- Abstract summary: Hardware vendors offer tensorized instructions for mixed-precision operations, like Intel VNNI, Nvidia Tensor Core, and ARM DOT.
The lack of compilation techniques for this makes it hard to utilize these instructions.
We develop a compiler framework to unify the compilation for these instructions.
- Score: 11.193044425743981
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Because of the increasing demand for computation in DNNs, researchers have developed
both hardware and software mechanisms to reduce the compute and memory burden.
A widely adopted approach is to use mixed precision data types. However, it is
hard to leverage mixed precision without hardware support because of the
overhead of data casting. Hardware vendors offer tensorized instructions for
mixed-precision tensor operations, like Intel VNNI, Tensor Core, and ARM-DOT.
These instructions involve a computing idiom that reduces multiple low
precision elements into one high precision element. The lack of compilation
techniques for this makes it hard to utilize these instructions: Using
vendor-provided libraries for computationally-intensive kernels is inflexible
and prevents further optimizations, and manually writing hardware intrinsics is
error-prone and difficult for programmers. Some prior works address this
problem by creating compilers for each instruction. This requires excessive
effort when it comes to many tensorized instructions. In this work, we develop
a compiler framework to unify the compilation for these instructions -- a
unified semantics abstraction eases the integration of new instructions, and
reuses the analysis and transformations. Tensorized instructions from different
platforms can be compiled via UNIT with moderate effort for favorable
performance. Given a tensorized instruction and a tensor operation, UNIT
automatically detects the applicability, transforms the loop organization of
the operation, and rewrites the loop body to leverage the tensorized
instruction. According to our evaluation, UNIT can target various mainstream
hardware platforms. The generated end-to-end inference model achieves 1.3x
speedup over Intel oneDNN on an x86 CPU, 1.75x speedup over Nvidia cuDNN on an
Nvidia GPU, and 1.13x speedup over a carefully tuned TVM solution for ARM DOT on
an ARM CPU.
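To make the reduction idiom concrete, here is a minimal C sketch of the pattern these instructions capture; the names are illustrative, not UNIT's API. Each group of four 8-bit products is reduced into one 32-bit accumulator, which is what a single lane of an ARM `sdot` (or, with unsigned-times-signed operands, an Intel VNNI `vpdpbusd`) computes.

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar semantics of the mixed-precision reduction idiom behind
 * tensorized instructions: four low-precision products summed into
 * one high-precision accumulator. */
static inline int32_t dot4_i8(const int8_t a[4], const int8_t b[4],
                              int32_t acc) {
    for (int i = 0; i < 4; ++i)
        acc += (int32_t)a[i] * (int32_t)b[i]; /* widen, multiply, reduce */
    return acc;
}

/* An int8 GEMM inner loop written against the idiom. B is stored as
 * N x K (transposed) and K is assumed to be a multiple of 4. A compiler
 * in the spirit of UNIT detects this pattern, reorganizes the loop nest,
 * and rewrites each dot4_i8 group into one hardware instruction. */
void gemm_i8(const int8_t *A, const int8_t *B, int32_t *C,
             size_t M, size_t N, size_t K) {
    for (size_t m = 0; m < M; ++m)
        for (size_t n = 0; n < N; ++n) {
            int32_t acc = 0;
            for (size_t k = 0; k < K; k += 4)
                acc = dot4_i8(&A[m * K + k], &B[n * K + k], acc);
            C[m * N + n] = acc;
        }
}
```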
Related papers
- KGym: A Platform and Dataset to Benchmark Large Language Models on Linux Kernel Crash Resolution [59.20933707301566]
Large Language Models (LLMs) are consistently improving at increasingly realistic software engineering (SE) tasks.
In real-world software stacks, significant SE effort is spent developing foundational system software like the Linux kernel.
To evaluate if ML models are useful while developing such large-scale systems-level software, we introduce kGym and kBench.
arXiv Detail & Related papers (2024-07-02T21:44:22Z) - Guess & Sketch: Language Model Guided Transpilation [59.02147255276078]
Learned transpilation offers an alternative to manual re-writing and engineering efforts.
Probabilistic neural language models (LMs) produce plausible outputs for every input, but do so at the cost of guaranteed correctness.
Guess & Sketch extracts alignment and confidence information from features of the LM then passes it to a symbolic solver to resolve semantic equivalence.
arXiv Detail & Related papers (2023-09-25T15:42:18Z) - INR-Arch: A Dataflow Architecture and Compiler for Arbitrary-Order
Gradient Computations in Implicit Neural Representation Processing [66.00729477511219]
Given a function represented as a computation graph, traditional architectures face challenges in efficiently computing its nth-order gradient.
We introduce INR-Arch, a framework that transforms the computation graph of an nth-order gradient into a hardware-optimized dataflow architecture.
We present results that demonstrate 1.8-4.8x and 1.5-3.6x speedup compared to CPU and GPU baselines respectively.
arXiv Detail & Related papers (2023-08-11T04:24:39Z) - PowerFusion: A Tensor Compiler with Explicit Data Movement Description
and Instruction-level Graph IR [10.059491353103526]
We propose IntelliGen, a tensor compiler that can generate high-performance code for memory-intensive operators.
IntelliGen considers both computation and data movement optimizations.
We evaluate IntelliGen on NVIDIA GPU, AMD GPU, and Cambricon MLU, showing speedups of up to 1.97x, 2.93x, and 16.91x, respectively (1.28x, 1.23x, and 2.31x on average).
arXiv Detail & Related papers (2023-07-11T03:17:40Z) - Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose the kernel development into two steps: 1) Expressing the computational core using Tensor Processing Primitives (TPPs) and 2) Expressing the logical loops around TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
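As a rough C sketch of this two-step decomposition (hypothetical names, not the paper's TPP API): the computational core is an isolated micro-kernel, and the logical loops around it are kept separate so blocking and ordering can change without touching the core.

```c
#include <stddef.h>

/* Step 1: the computational core as a small primitive, standing in for
 * a Tensor Processing Primitive (a real TPP would be a hardware-
 * specialized micro-kernel such as a batch-reduce GEMM). */
static void core_gemm_block(const float *A, const float *B, float *C,
                            size_t bm, size_t bn, size_t bk,
                            size_t lda, size_t ldb, size_t ldc) {
    for (size_t i = 0; i < bm; ++i)
        for (size_t j = 0; j < bn; ++j)
            for (size_t k = 0; k < bk; ++k)
                C[i * ldc + j] += A[i * lda + k] * B[k * ldb + j];
}

/* Step 2: the logical loops around the primitive. Only this layer
 * changes when blocking, loop order, or parallelization change.
 * BM, BN, BK are assumed to divide M, N, K. */
void gemm(const float *A, const float *B, float *C,
          size_t M, size_t N, size_t K,
          size_t BM, size_t BN, size_t BK) {
    for (size_t m = 0; m < M; m += BM)
        for (size_t n = 0; n < N; n += BN)
            for (size_t k = 0; k < K; k += BK)
                core_gemm_block(&A[m * K + k], &B[k * N + n], &C[m * N + n],
                                BM, BN, BK, K, N, N);
}
```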
arXiv Detail & Related papers (2023-04-25T05:04:44Z) - SparseTIR: Composable Abstractions for Sparse Compilation in Deep
Learning [11.251022748134215]
Sparse tensor compilers simplify the development of operators, but efficient sparse compilation for deep learning remains challenging.
We show that the key to addressing both challenges is two forms of composability.
In this paper, we propose SparseTIR, a sparse tensor compilation abstraction that offers composable formats and composable transformations.
arXiv Detail & Related papers (2022-07-11T03:49:53Z) - The CoRa Tensor Compiler: Compilation for Ragged Tensors with Minimal
Padding [14.635810503599759]
CoRa is a tensor compiler that allows users to easily generate efficient code for ragged tensor operators.
We evaluate CoRa on a variety of operators on ragged tensors as well as on an encoder layer of the transformer model.
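For intuition, a ragged tensor can be stored without padding using CSR-style row offsets; the sketch below is an illustrative C layout, not CoRa's internal representation.

```c
#include <stddef.h>

/* Ragged 2-D tensor without padding: row i occupies
 * data[offsets[i] .. offsets[i+1]). A dense framework would instead
 * pad every row to the longest row's length. */
typedef struct {
    float  *data;     /* all elements, rows stored back-to-back */
    size_t *offsets;  /* n_rows + 1 entries */
    size_t  n_rows;
} ragged2d;

/* An operator over the ragged layout: per-row sums, with no wasted
 * work on padded elements. */
void row_sums(const ragged2d *t, float *out) {
    for (size_t i = 0; i < t->n_rows; ++i) {
        float s = 0.0f;
        for (size_t j = t->offsets[i]; j < t->offsets[i + 1]; ++j)
            s += t->data[j];
        out[i] = s;
    }
}
```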
arXiv Detail & Related papers (2021-10-19T19:39:04Z) - XDA: Accurate, Robust Disassembly with Transfer Learning [23.716121748941138]
XDA is a transfer-learning-based disassembly framework.
It learns different contextual dependencies present in machine code.
It is up to 38x faster than hand-written disassemblers like IDA Pro.
arXiv Detail & Related papers (2020-10-02T04:14:17Z) - Kernel methods through the roof: handling billions of points efficiently [94.31450736250918]
Kernel methods provide an elegant and principled approach to nonparametric learning, but have so far been hard to apply to large-scale problems.
Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections.
Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware.
arXiv Detail & Related papers (2020-06-18T08:16:25Z) - PolyDL: Polyhedral Optimizations for Creation of High Performance DL
primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid compiler plus a minimal library-use approach results in state-of-the-art performance.
arXiv Detail & Related papers (2020-06-02T06:44:09Z) - TFApprox: Towards a Fast Emulation of DNN Approximate Hardware
Accelerators on GPU [0.4817429789586127]
Energy efficiency of hardware accelerators of deep neural networks (DNN) can be improved by introducing approximate arithmetic circuits.
A software emulation of the DNN accelerator is usually executed on CPU or GPU.
This emulation is typically two or three orders of magnitude slower than a standard software DNN implementation running on the same hardware.
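A common way to emulate an approximate 8x8-bit multiplier in software, and the kind of operation such emulators offload to the GPU, is a precomputed lookup table over all 65,536 input pairs; the C sketch below is illustrative, not TFApprox's code.

```c
#include <stdint.h>

/* Lookup table for an approximate 8x8-bit multiplier: one entry per
 * input pair. A real flow would fill it with the approximate circuit's
 * precomputed outputs; an exact multiplier serves as a placeholder here. */
static uint16_t approx_mul_lut[256][256];

void init_lut(void) {
    for (int a = 0; a < 256; ++a)
        for (int b = 0; b < 256; ++b)
            approx_mul_lut[a][b] = (uint16_t)(a * b); /* placeholder: exact product */
}

/* One table read replaces gate-level simulation of the circuit. */
static inline uint16_t approx_mul(uint8_t a, uint8_t b) {
    return approx_mul_lut[a][b];
}
```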
arXiv Detail & Related papers (2020-02-21T08:22:56Z)