EinHops: Einsum Notation for Expressive Homomorphic Operations on RNS-CKKS Tensors
- URL: http://arxiv.org/abs/2507.07972v1
- Date: Thu, 10 Jul 2025 17:50:10 GMT
- Title: EinHops: Einsum Notation for Expressive Homomorphic Operations on RNS-CKKS Tensors
- Authors: Karthik Garimella, Austin Ebel, Brandon Reagen
- Abstract summary: Fully Homomorphic Encryption (FHE) allows computation to be performed directly on encrypted data. FHE provides a limited instruction set: SIMD addition, SIMD multiplication, and cyclic rotation of 1-D vectors. We present EinHops, a minimalist system that factors einsum expressions into a fixed sequence of FHE operations.
- Score: 3.0088450191132394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fully Homomorphic Encryption (FHE) is an encryption scheme that allows for computation to be performed directly on encrypted data, effectively closing the loop on secure and outsourced computing. Data is encrypted not only at rest and in transit, but also during processing. However, FHE provides a limited instruction set: SIMD addition, SIMD multiplication, and cyclic rotation of 1-D vectors. This restriction makes performing multi-dimensional tensor operations challenging. Practitioners must pack these tensors into 1-D vectors and map tensor operations onto this one-dimensional layout rather than their traditional nested structure. And while prior systems have made significant strides in automating this process, they often hide critical packing decisions behind layers of abstraction, making debugging, optimizing, and building on top of these systems difficult. In this work, we approach multi-dimensional tensor operations in FHE through Einstein summation (einsum) notation. Einsum notation explicitly encodes dimensional structure and operations in its syntax, naturally exposing how tensors should be packed and transformed. We decompose einsum expressions into a fixed set of FHE-friendly operations. We implement our design and present EinHops, a minimalist system that factors einsum expressions into a fixed sequence of FHE operations. EinHops enables developers to perform encrypted tensor operations using FHE while maintaining full visibility into the underlying packing strategy. We evaluate EinHops on a range of tensor operations from a simple transpose to complex multi-dimensional contractions. We show that the explicit nature of einsum notation allows us to build an FHE tensor system that is simple, general, and interpretable. We open-source EinHops at the following repository: https://github.com/baahl-nyu/einhops.
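To make the decomposition concrete, here is a minimal plaintext sketch (simulating ciphertext slots with NumPy arrays) of how an einsum expression such as "ij,j->i" can be lowered onto the three FHE primitives the abstract names. The function names, the 16-slot layout, and the packing scheme are illustrative assumptions, not EinHops's actual API; a real RNS-CKKS backend would run the same sequence of operations on encrypted vectors.

```python
import numpy as np

SLOTS = 16  # illustrative power-of-two slot count; real CKKS vectors are far larger

# The three FHE primitives, simulated on plaintext arrays.
def simd_add(a, b):
    return a + b            # slot-wise addition

def simd_mul(a, b):
    return a * b            # slot-wise multiplication

def rotate(a, k):
    return np.roll(a, -k)   # cyclic left rotation of the 1-D slot vector

def matvec_einsum(M, v):
    """Evaluate einsum('ij,j->i', M, v) using only the three primitives."""
    n = M.shape[0]                   # here n = 4, so n * n = SLOTS
    ct_m = M.flatten()               # pack the rows of M contiguously into slots
    ct_v = np.tile(v, n)             # replicate v so it aligns with every row
    acc = simd_mul(ct_m, ct_v)       # slot i*n + j now holds M[i, j] * v[j]
    shift = 1
    while shift < n:                 # contract over j in log2(n) rotate-and-add steps
        acc = simd_add(acc, rotate(acc, shift))
        shift *= 2
    mask = np.zeros(n * n)           # slot i*n holds row i's dot product;
    mask[::n] = 1.0                  # mask away the cross-row garbage slots
    return simd_mul(acc, mask)

M = np.arange(16, dtype=float).reshape(4, 4)
v = np.array([1.0, 2.0, 3.0, 4.0])
out = matvec_einsum(M, v)
assert np.allclose(out[::4], np.einsum("ij,j->i", M, v))
```

The packing choices here (row-major layout, replicated vector, mask after the reduction) are exactly the kind of decisions the abstract argues should stay visible rather than be hidden behind abstraction layers.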
Related papers
- A Sparse Tensor Generator with Efficient Feature Extraction [1.433758865948252]
A major obstacle in sparse tensor research is the lack of large-scale sparse tensor datasets.
We have developed a smart sparse tensor generator that replicates key characteristics of real sparse tensors.
We also propose efficient methods for extracting a comprehensive set of sparse tensor features.
arXiv Detail & Related papers (2024-05-08T10:28:20Z)
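As a rough illustration of the kind of features the entry above refers to, the sketch below computes a few standard statistics (nonzero count, density, nonzeros per slice) from a 3-D tensor stored in COO form; the paper's actual feature set may differ.

```python
import numpy as np

# A 3-D sparse tensor in COO form: one coordinate row plus one value per entry.
shape  = (100, 80, 60)
coords = np.array([[0, 1, 2], [0, 5, 7], [3, 5, 9], [99, 79, 59]])  # (nnz, 3)
vals   = np.array([1.0, 2.0, 0.5, 4.0])

nnz     = len(vals)
density = nnz / np.prod(shape)
print(f"nnz = {nnz}, density = {density:.2e}")

# Nonzeros per slice along each mode: a common shape/imbalance feature.
for mode in range(3):
    per_slice = np.bincount(coords[:, mode], minlength=shape[mode])
    print(f"mode {mode}: max nnz/slice = {per_slice.max()}, "
          f"mean = {per_slice.mean():.4f}, cv = {per_slice.std() / per_slice.mean():.2f}")
```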
- Dictionary Learning Improves Patch-Free Circuit Discovery in Mechanistic Interpretability: A Case Study on Othello-GPT [59.245414547751636]
We propose a circuit discovery framework as an alternative to activation patching.
Our framework suffers less from out-of-distribution issues and proves to be more efficient in terms of complexity.
We dig into a small transformer trained on a synthetic task named Othello and find a number of human-understandable fine-grained circuits inside it.
arXiv Detail & Related papers (2024-02-19T15:04:53Z)
- An introduction to graphical tensor notation for mechanistic interpretability [0.0]
It's often easy to get confused about which operations are happening between tensors.
The first half of this document introduces the notation and applies it to some decompositions.
The second half applies it to some foundational approaches for mechanistically understanding language models.
arXiv Detail & Related papers (2024-02-02T02:56:01Z)
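The notation theme above is also the premise of EinHops: an index string names every axis, so the structure of an operation is explicit in its syntax. The NumPy comparison below is a generic illustration, not an example taken from either paper.

```python
import numpy as np

A = np.random.rand(2, 3, 4)   # axes: (batch, i, j)
B = np.random.rand(2, 4, 5)   # axes: (batch, j, k)

# Loop form: which axes pair up is buried in the index arithmetic.
out_loops = np.zeros((2, 3, 5))
for b in range(2):
    for i in range(3):
        for k in range(5):
            out_loops[b, i, k] = sum(A[b, i, j] * B[b, j, k] for j in range(4))

# Einsum form: the shared label 'j' states the contraction explicitly.
out_einsum = np.einsum("bij,bjk->bik", A, B)
assert np.allclose(out_loops, out_einsum)

# Other common operations read directly off the index strings:
np.einsum("ij->ji", A[0])     # transpose
np.einsum("ii->", np.eye(3))  # trace
```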
- Reduce Computational Complexity for Convolutional Layers by Skipping Zeros [9.833821501774596]
We propose an efficient algorithm for convolutional neural networks.
The C-K-S algorithm is accompanied by efficient GPU implementations.
Experiments show that C-K-S offers good performance in terms of speed and convergence.
arXiv Detail & Related papers (2023-06-28T06:21:22Z)
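The "skipping zeros" idea can be illustrated with a direct convolution that only visits nonzero taps. This is a generic sketch of the principle, not the C-K-S algorithm itself; on real GPUs the savings come from restructured kernels rather than a Python-level branch.

```python
import numpy as np

def conv1d_skip_zeros(x, w):
    """Direct 1-D cross-correlation (valid mode) that skips zero weights."""
    nz = [(j, wj) for j, wj in enumerate(w) if wj != 0.0]  # precompute once
    out = np.zeros(len(x) - len(w) + 1)
    for i in range(len(out)):
        # Only the nonzero taps contribute, so pruned filters cost less.
        out[i] = sum(wj * x[i + j] for j, wj in nz)
    return out

x = np.random.rand(10)
w = np.array([0.5, 0.0, 0.0, -0.25])        # a mostly-zero (pruned) filter
assert np.allclose(conv1d_skip_zeros(x, w), np.convolve(x, w[::-1], mode="valid"))
```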
- TensorKrowch: Smooth integration of tensor networks in machine learning [46.0920431279359]
We introduce TensorKrowch, an open-source Python library built on top of PyTorch.
TensorKrowch allows users to construct any tensor network, train it, and integrate it as a layer in more intricate deep learning models.
arXiv Detail & Related papers (2023-06-14T15:55:19Z)
- Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing methods are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
arXiv Detail & Related papers (2022-07-05T03:08:27Z)
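Softmax-free attention is usually motivated by associativity: once the softmax is replaced with a decomposable similarity, (Q'K'^T)V can be regrouped as Q'(K'^T V), dropping the cost from quadratic to linear in sequence length. The sketch below uses a common feature map from the generic linear-attention literature, not SOFT's Gaussian-kernel construction.

```python
import numpy as np

def feature_map(x):
    # ELU + 1, a common positive feature map in linear attention; SOFT
    # itself uses a different, Gaussian-kernel-based construction.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Softmax-free attention in O(n * d^2) instead of O(n^2 * d)."""
    Qf, Kf = feature_map(Q), feature_map(K)   # (n, d), strictly positive
    kv = Kf.T @ V                             # (d, d): summarize keys/values once
    z = Qf @ Kf.sum(axis=0)                   # (n,): per-query normalization
    return (Qf @ kv) / z[:, None]

n, d = 128, 16
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(linear_attention(Q, K, V).shape)        # (128, 16)
```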
- THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z)
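An FHE backend that offers only additions and multiplications can evaluate polynomials but not functions like GELU or softmax directly, which is why approximation approaches such as THE-X substitute HE-friendly surrogates. The sketch below fits a low-degree polynomial to GELU by least squares; it illustrates the general idea only and is not THE-X's actual approximation recipe.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, standard in transformer implementations
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# Fit a degree-4 polynomial on an interval where activations typically live.
xs = np.linspace(-4.0, 4.0, 2001)
coeffs = np.polyfit(xs, gelu(xs), deg=4)

def gelu_he_friendly(x):
    # Horner evaluation: only additions and multiplications, so a scheme
    # offering SIMD add/mul can evaluate it slot-wise on ciphertexts.
    acc = np.zeros_like(x)
    for c in coeffs:
        acc = acc * x + c
    return acc

x = np.linspace(-3.0, 3.0, 7)
print(np.max(np.abs(gelu(x) - gelu_he_friendly(x))))  # max error on the fitted interval
```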
- Stack operation of tensor networks [10.86105335102537]
We propose a mathematically rigorous definition for the tensor network stack approach.
We illustrate the main ideas with matrix product state-based machine learning as an example.
arXiv Detail & Related papers (2022-03-28T12:45:13Z)
- The CoRa Tensor Compiler: Compilation for Ragged Tensors with Minimal Padding [14.635810503599759]
CoRa is a tensor compiler that allows users to easily generate efficient code for ragged tensor operators.
We evaluate CoRa on a variety of operators on ragged tensors as well as on an encoder layer of the transformer model.
arXiv Detail & Related papers (2021-10-19T19:39:04Z)
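A ragged tensor is commonly stored as a flat value buffer plus row offsets (CSR-style), so operators can index variable-length rows without padding them to a common length. The sketch below is a generic illustration of that layout, not CoRa's internal representation.

```python
import numpy as np

# Three "rows" of lengths 3, 1, and 4, stored without padding:
values  = np.array([1.0, 2.0, 3.0,   9.0,   4.0, 5.0, 6.0, 7.0])
offsets = np.array([0, 3, 4, 8])   # row i lives in values[offsets[i]:offsets[i+1]]

def ragged_row_sums(values, offsets):
    # Each row is reduced over exactly its own entries; a padded layout
    # would instead waste work on filler zeros.
    return np.array([values[offsets[i]:offsets[i + 1]].sum()
                     for i in range(len(offsets) - 1)])

print(ragged_row_sums(values, offsets))   # [ 6.  9. 22.]
```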
- X-volution: On the unification of convolution and self-attention [52.80459687846842]
We propose a multi-branch elementary module composed of both convolution and self-attention operations.
The proposed X-volution achieves highly competitive visual understanding improvements.
arXiv Detail & Related papers (2021-06-04T04:32:02Z)
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at a fraction of their entries only.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
arXiv Detail & Related papers (2021-05-29T08:39:57Z)
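Cross-approximation reconstructs a (numerically) low-rank matrix from a handful of its rows and columns, which is how a model can work with large tensors while looking at only a fraction of their entries. Below is the basic skeleton (CUR) decomposition on a synthetic rank-3 matrix; the paper's differentiable variant is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 150))  # exactly rank 3

# Sample a few row and column indices (uniformly here; practical algorithms
# use smarter pivoting such as maxvol).
I = rng.choice(200, size=3, replace=False)
J = rng.choice(150, size=3, replace=False)

C = A[:, J]                 # sampled columns   (200, 3)
R = A[I, :]                 # sampled rows      (3, 150)
U = A[np.ix_(I, J)]         # their intersection (3, 3)

A_hat = C @ np.linalg.pinv(U) @ R   # skeleton/CUR reconstruction
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))  # ~0 for an exact rank-3 matrix
```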
- Named Tensor Notation [117.30373263410507]
We propose a notation for tensors with named axes.
It relieves the author, reader, and future implementers from the burden of keeping track of the order of axes.
It also makes it easy to extend operations on low-order tensors to higher order ones.
arXiv Detail & Related papers (2021-02-25T22:21:30Z)
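The named-axes idea is easy to demonstrate: if each axis carries a label, operations can be specified by name and no longer depend on axis order. Below is a tiny dict-free wrapper sketch in plain NumPy; the paper defines notation rather than a library, so this class is purely illustrative.

```python
import numpy as np

class Named:
    """A minimal named-axis wrapper: a NumPy array plus an axis-name tuple."""
    def __init__(self, data, names):
        data = np.asarray(data)
        assert data.ndim == len(names)
        self.data, self.names = data, tuple(names)

    def sum(self, name):
        # Reduce over an axis by name; the caller never tracks positions.
        axis = self.names.index(name)
        return Named(self.data.sum(axis=axis),
                     self.names[:axis] + self.names[axis + 1:])

def dot(a, b, name):
    # Contract two named tensors over the shared axis `name`.
    ax_a, ax_b = a.names.index(name), b.names.index(name)
    data = np.tensordot(a.data, b.data, axes=(ax_a, ax_b))
    return Named(data, a.names[:ax_a] + a.names[ax_a + 1:]
                     + b.names[:ax_b] + b.names[ax_b + 1:])

img = Named(np.random.rand(3, 32, 32), ("channel", "height", "width"))
w   = Named(np.random.rand(10, 3), ("class", "channel"))
scores = dot(w, img, "channel")      # works regardless of axis order
print(scores.names)                  # ('class', 'height', 'width')
```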
- Tensor Relational Algebra for Machine Learning System Design [7.764107702934616]
We present an alternative implementation abstraction called the tensor relational algebra (TRA).
TRA is a set-based algebra based on the relational algebra.
Our empirical study shows that the optimized TRA-based back-end can significantly outperform alternatives for running ML in distributed clusters.
arXiv Detail & Related papers (2020-09-01T15:51:24Z)
- Spectral Learning on Matrices and Tensors [74.88243719463053]
We show that tensor decomposition can pick up latent effects that are missed by matrix methods.
We also outline computational techniques to design efficient tensor decomposition methods.
arXiv Detail & Related papers (2020-04-16T22:53:00Z)