mlirSynth: Automatic, Retargetable Program Raising in Multi-Level IR
using Program Synthesis
- URL: http://arxiv.org/abs/2310.04196v1
- Date: Fri, 6 Oct 2023 12:21:50 GMT
- Title: mlirSynth: Automatic, Retargetable Program Raising in Multi-Level IR
using Program Synthesis
- Authors: Alexander Brauckmann, Elizabeth Polgreen, Tobias Grosser, Michael F.
P. O'Boyle
- Abstract summary: mlirSynth translates programs from lower-level MLIR dialects to high-level ones without manually defined rules.
We demonstrate its effectiveness by raising C programs to two distinct high-level MLIR dialects, which enables us to use existing high-level dialect-specific compilation flows.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: MLIR is an emerging compiler infrastructure for modern hardware, but existing
programs cannot take advantage of MLIR's high-performance compilation if they
are described in lower-level general purpose languages. Consequently, to avoid
manual rewriting, there have been efforts to automatically raise programs from
lower-level to higher-level dialects in MLIR. However,
current methods rely on manually-defined raising rules, which limit their
applicability and make them challenging to maintain as MLIR dialects evolve.
We present mlirSynth -- a novel approach which translates programs from
lower-level MLIR dialects to high-level ones without manually defined rules.
Instead, it uses available dialect definitions to construct a program space and
searches it effectively using type constraints and equivalences. We demonstrate
its effectiveness by raising C programs to two distinct high-level MLIR
dialects, which enables us to use existing high-level dialect-specific
compilation flows. On Polybench, we show a greater coverage than previous
approaches, resulting in geomean speedups of 2.5x (Intel) and 3.4x (AMD) over
state-of-the-art compilation flows for the C programming language. mlirSynth
also enables retargetability to domain-specific accelerators, resulting in a
geomean speedup of 21.6x on a TPU.
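The abstract describes building a program space from dialect definitions and searching it under type constraints. A minimal sketch of that general technique (type-directed enumerative synthesis) is below; the toy dialect, operation names, and data structures are illustrative assumptions, not mlirSynth's actual implementation.

```python
# Hedged sketch of type-directed enumerative search: candidates are grown
# bottom-up, and type constraints prune ill-typed combinations early.
# The "dialect" here is a hypothetical toy, not a real MLIR dialect.
from itertools import product

# Each op maps to (argument types, result type), as a dialect
# definition would declare.
OPS = {
    "add": (("f64", "f64"), "f64"),
    "mul": (("f64", "f64"), "f64"),
    "dot": (("vec", "vec"), "f64"),
}

def enumerate_programs(inputs, max_depth):
    """Enumerate well-typed expressions over the inputs up to max_depth."""
    # Level 0: the input variables themselves, bucketed by type.
    by_type = {}
    for name, ty in inputs.items():
        by_type.setdefault(ty, []).append(name)
    for _ in range(max_depth):
        new = []
        for op, (arg_tys, res_ty) in OPS.items():
            # Only draw arguments whose types match the op's signature;
            # everything else is pruned without being constructed.
            pools = [by_type.get(t, []) for t in arg_tys]
            for args in product(*pools):
                new.append((res_ty, f"{op}({', '.join(args)})"))
        for ty, expr in new:
            if expr not in by_type.setdefault(ty, []):
                by_type[ty].append(expr)
    return by_type

space = enumerate_programs({"a": "vec", "b": "vec"}, max_depth=1)
print(space["f64"])  # well-typed candidates such as dot(a, b)
```

In a real raiser, each surviving candidate would additionally be checked for behavioral equivalence against the original low-level program; here only the type-based pruning step is shown.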
Related papers
- DSP-MLIR: A MLIR Dialect for Digital Signal Processing [3.1688509302874652]
In this paper, we utilize the MLIR framework to introduce a DSP dialect and perform domain-specific optimizations at the dialect level (high-level).
We show execution-time improvements of up to 10x for these sample apps, which would have been difficult if the IR were at the C/affine level.
arXiv Detail & Related papers (2024-08-20T21:33:17Z) - Forklift: An Extensible Neural Lifter [11.633770744027682]
We propose Forklift, the first neural lifter that learns how to translate assembly to LLVM IR using a token-level encoder-decoder Transformer.
We collect millions of parallel LLVM IR, x86, ARM, and RISC-V programs across compilers and optimization levels to train Forklift and set up an input/output-based accuracy harness.
We evaluate Forklift on two challenging benchmark suites and translate 2.5x more x86 programs than a state-of-the-art hand-written lifter and 4.4x more x86 programs than GPT-4 as well as enabling translation from new ISAs.
arXiv Detail & Related papers (2024-04-01T17:27:58Z) - QParallel: Explicit Parallelism for Programming Quantum Computers [62.10004571940546]
We present a language extension for parallel quantum programming.
QParallel removes ambiguities concerning parallelism in current quantum programming languages.
We introduce a tool that guides programmers in the placement of parallel regions by identifying the subroutines that profit most from parallelization.
arXiv Detail & Related papers (2022-10-07T16:35:16Z) - Improving Mandarin End-to-End Speech Recognition with Word N-gram
Language Model [57.92200214957124]
External language models (LMs) are used to improve the recognition performance of end-to-end (E2E) automatic speech recognition (ASR) systems.
We propose a novel decoding algorithm where a word-level lattice is constructed on-the-fly to consider all possible word sequences.
Our method consistently outperforms subword-level LMs, including N-gram LM and neural network LM.
arXiv Detail & Related papers (2022-01-06T10:04:56Z) - Enabling Retargetable Optimizing Compilers for Quantum Accelerators via
a Multi-Level Intermediate Representation [78.8942067357231]
We present a multi-level quantum-classical intermediate representation (IR) that enables an optimizing, retargetable, ahead-of-time compiler.
We support the entire gate-based OpenQASM 3 language and provide custom extensions for common quantum programming patterns and improved syntax.
Our work results in compile times that are 1000x faster than standard Pythonic approaches, and 5-10x faster than comparative standalone quantum language compilers.
arXiv Detail & Related papers (2021-09-01T17:29:47Z) - A MLIR Dialect for Quantum Assembly Languages [78.8942067357231]
We demonstrate the utility of the Multi-Level Intermediate Representation (MLIR) for quantum computing.
We extend MLIR with a new quantum dialect that enables the expression and compilation of common quantum assembly languages.
We leverage a qcor-enabled implementation of the QIR quantum runtime API to enable a retargetable (quantum hardware agnostic) compiler workflow.
arXiv Detail & Related papers (2021-01-27T13:00:39Z) - Instead of Rewriting Foreign Code for Machine Learning, Automatically
Synthesize Fast Gradients [6.09170287691728]
This paper presents Enzyme, a high-performance automatic differentiation (AD) compiler plugin for the LLVM compiler framework.
Enzyme synthesizes gradients for programs written in any language whose compiler targets LLVM intermediate representation (IR)
On a machine-learning focused benchmark suite including Microsoft's ADBench, AD on optimized IR achieves a geometric mean speedup of 4.5x over AD on IR.
arXiv Detail & Related papers (2020-10-04T22:32:51Z) - Compiling ONNX Neural Network Models Using MLIR [51.903932262028235]
We present a preliminary report on our onnx-mlir compiler, which generates code for the inference of deep neural network models.
Onnx-mlir relies on the Multi-Level Intermediate Representation (MLIR) infrastructure recently integrated in the LLVM project.
arXiv Detail & Related papers (2020-08-19T05:28:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.