EVA: An Encrypted Vector Arithmetic Language and Compiler for Efficient
Homomorphic Computation
- URL: http://arxiv.org/abs/1912.11951v2
- Date: Fri, 26 Jun 2020 16:15:19 GMT
- Authors: Roshan Dathathri, Blagovesta Kostova, Olli Saarikivi, Wei Dai, Kim
Laine, Madanlal Musuvathi
- Abstract summary: This paper presents a new FHE language called Encrypted Vector Arithmetic (EVA).
EVA includes an optimizing compiler that generates correct and secure FHE programs.
Programmers can develop efficient general-purpose FHE applications directly in EVA.
- Score: 11.046862694768894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully-Homomorphic Encryption (FHE) offers powerful capabilities by enabling
secure offloading of both storage and computation, and recent innovations in
schemes and implementations have made it all the more attractive. At the same
time, FHE is notoriously hard to use: it has a very constrained programming
model, an unusual performance profile, and many cryptographic constraints.
Existing compilers for FHE either target simpler but less efficient FHE schemes
or only support specific domains where they can rely on expert-provided
high-level runtimes to hide complications.
This paper presents a new FHE language called Encrypted Vector Arithmetic
(EVA), which includes an optimizing compiler that generates correct and secure
FHE programs, while hiding all the complexities of the target FHE scheme.
Bolstered by our optimizing compiler, programmers can develop efficient
general-purpose FHE applications directly in EVA. For example, we have
developed image processing applications in EVA with only a few lines of
code.
EVA is designed to also work as an intermediate representation that can be a
target for compiling higher-level domain-specific languages. To demonstrate
this, we have re-targeted CHET, an existing domain-specific compiler for neural
network inference, onto EVA. Due to the novel optimizations in EVA, its
programs are on average 5.3x faster than those generated by CHET. We believe
that EVA would enable a wider adoption of FHE by making it easier to develop
FHE applications and domain-specific FHE compilers.
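The abstract describes EVA as a vector-arithmetic language: under CKKS-style batching, one ciphertext holds a whole vector of values, so a single homomorphic add or multiply acts element-wise on every slot. The following is a minimal plain-Python sketch of that programming model; `CipherVector` and `horner_eval` are illustrative names, not EVA's actual API, and real ciphertext operations, noise, and rescaling are omitted.

```python
class CipherVector:
    """Stands in for a CKKS ciphertext batching a vector of encrypted values."""
    def __init__(self, slots):
        self.slots = list(slots)

    def __add__(self, other):
        # One homomorphic addition: element-wise across all slots at once.
        return CipherVector(a + b for a, b in zip(self.slots, other.slots))

    def __mul__(self, other):
        # One homomorphic multiplication (the costly op that consumes depth).
        return CipherVector(a * b for a, b in zip(self.slots, other.slots))

def horner_eval(x, coeffs):
    """Evaluate a polynomial on every slot via Horner's rule, which keeps
    multiplicative depth low -- the kind of structure an FHE compiler favors."""
    n = len(x.slots)
    acc = CipherVector([coeffs[0]] * n)
    for c in coeffs[1:]:
        acc = acc * x + CipherVector([c] * n)
    return acc

x = CipherVector([1.0, 2.0, 3.0])
y = horner_eval(x, [1.0, 0.0, -2.0])  # x^2 - 2, applied per slot
print(y.slots)  # [-1.0, 2.0, 7.0]
```

Horner's rule is used here because an FHE compiler must budget multiplicative depth: evaluating a degree-d polynomial this way needs only d sequential multiplications.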
Related papers
- A Method for Efficient Heterogeneous Parallel Compilation: A Cryptography Case Study [8.06660833012594]
This paper introduces a novel MLIR-based dialect, named hyper, designed to optimize data management and parallel computation across diverse hardware architectures.
We present HETOCompiler, a cryptography-focused compiler prototype that implements multiple hash algorithms and enables their execution on heterogeneous systems.
arXiv Detail & Related papers (2024-07-12T15:12:51Z)
- BoostCom: Towards Efficient Universal Fully Homomorphic Encryption by Boosting the Word-wise Comparisons [14.399750086329345]
Fully Homomorphic Encryption (FHE) allows for the execution of computations on encrypted data without the need to decrypt it first.
In this paper, we introduce BoostCom, a scheme designed to speed up word-wise comparison operations.
We achieve an end-to-end performance improvement of more than an order of magnitude (11.1x faster) compared to the state-of-the-art CPU-based uFHE systems.
arXiv Detail & Related papers (2024-07-10T02:09:10Z)
- A Compiler from Array Programs to Vectorized Homomorphic Encryption [1.6216324006136673]
Homomorphic encryption (HE) is a practical approach to secure computation over encrypted data.
We present Viaduct-HE, a compiler that generates efficient vectorized HE programs.
Viaduct-HE can generate both the operations and complex data layouts required for efficient HE programs.
arXiv Detail & Related papers (2023-11-10T16:00:00Z)
- Harnessing Deep Learning and HPC Kernels via High-Level Loop and Tensor Abstractions on CPU Architectures [67.47328776279204]
This work introduces a framework to develop efficient, portable Deep Learning and High Performance Computing kernels.
We decompose the kernel development in two steps: 1) Expressing the computational core using Tensor Processing Primitives (TPPs) and 2) Expressing the logical loops around TPPs in a high-level, declarative fashion.
We demonstrate the efficacy of our approach using standalone kernels and end-to-end workloads that outperform state-of-the-art implementations on diverse CPU platforms.
arXiv Detail & Related papers (2023-04-25T05:04:44Z)
- Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures [68.91874045918112]
adapter-ALBERT is an efficient model optimization for maximal data reuse across different tasks.
We demonstrate the advantage of mapping the model to a heterogeneous on-chip memory architecture by performing simulations on a validated NLP edge accelerator.
arXiv Detail & Related papers (2023-03-25T14:40:59Z)
- Boosting Neural Networks to Decompile Optimized Binaries [13.255618541522436]
Decompilation aims to transform a low-level programming language (LPL) into its functionally equivalent high-level programming language (HPL).
We propose a novel learning-based approach named NeurDP, that targets compiler-optimized binaries.
arXiv Detail & Related papers (2023-01-03T06:45:54Z)
- Enabling Retargetable Optimizing Compilers for Quantum Accelerators via a Multi-Level Intermediate Representation [78.8942067357231]
We present a multi-level quantum-classical intermediate representation (IR) that enables an optimizing, retargetable, ahead-of-time compiler.
We support the entire gate-based OpenQASM 3 language and provide custom extensions for common quantum programming patterns and improved syntax.
Our work results in compile times that are 1000x faster than standard Pythonic approaches, and 5-10x faster than comparative standalone quantum language compilers.
arXiv Detail & Related papers (2021-09-01T17:29:47Z)
- Instead of Rewriting Foreign Code for Machine Learning, Automatically Synthesize Fast Gradients [6.09170287691728]
This paper presents Enzyme, a high-performance automatic differentiation (AD) compiler plugin for the LLVM compiler framework.
Enzyme synthesizes gradients for programs written in any language whose compiler targets LLVM intermediate representation (IR).
On a machine-learning focused benchmark suite including Microsoft's ADBench, AD on optimized IR achieves a geometric mean speedup of 4.5x over performing AD before optimization.
arXiv Detail & Related papers (2020-10-04T22:32:51Z)
- Efficient Learning of Generative Models via Finite-Difference Score Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivative with finite difference.
Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations.
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
- Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z) - PolyDL: Polyhedral Optimizations for Creation of High Performance DL
primitives [55.79741270235602]
We present compiler algorithms to automatically generate high performance implementations of Deep Learning primitives.
We develop novel data reuse analysis algorithms using the polyhedral model.
We also show that such a hybrid compiler plus a minimal library-use approach results in state-of-the-art performance.
arXiv Detail & Related papers (2020-06-02T06:44:09Z)
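The finite-difference score-matching entry above describes approximating any directional derivative with function evaluations alone. A minimal sketch of the standard central-difference formula it builds on, using a hypothetical `directional_derivative` helper (the paper's actual estimator and variance analysis are more involved):

```python
def directional_derivative(f, x, v, eps=1e-5):
    """Central finite-difference approximation of the directional derivative
    of f at x along v: (f(x + eps*v) - f(x - eps*v)) / (2*eps).
    Uses only two function evaluations, which can run in parallel,
    and no gradient computation."""
    xp = [xi + eps * vi for xi, vi in zip(x, v)]
    xm = [xi - eps * vi for xi, vi in zip(x, v)]
    return (f(xp) - f(xm)) / (2 * eps)

# f(x, y) = x^2 + 3y has gradient (2x, 3), so along v = (1, 1) at (2, 5)
# the exact directional derivative is 2*2 + 3 = 7.
f = lambda p: p[0] ** 2 + 3 * p[1]
d = directional_derivative(f, [2.0, 5.0], [1.0, 1.0])
print(round(d, 6))  # ≈ 7.0
```

The central difference is second-order accurate, so for smooth f the error shrinks as O(eps^2); for a quadratic like this one it is exact up to floating-point rounding.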
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.