TFHE-SBC: Software Designs for Fully Homomorphic Encryption over the Torus on Single Board Computers
- URL: http://arxiv.org/abs/2503.02559v2
- Date: Sat, 26 Apr 2025 04:01:00 GMT
- Title: TFHE-SBC: Software Designs for Fully Homomorphic Encryption over the Torus on Single Board Computers
- Authors: Marin Matsumoto, Ai Nozaki, Hideki Takase, Masato Oguchi
- Abstract summary: Fully homomorphic encryption (FHE) enables statistical processing and machine learning while protecting data. TFHE requires Torus Learning With Error (TLWE) encryption, which encrypts one bit at a time, leading to less efficient encryption and larger ciphertext size. We propose a novel SBC-specific design, \textsf{TFHE-SBC}, to accelerate client-side TFHE operations and enhance communication and energy efficiency.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully homomorphic encryption (FHE) is a technique that enables statistical processing and machine learning while protecting data, including sensitive information collected by single board computers (SBCs), on a cloud server. Among FHE schemes, the TFHE scheme is capable of homomorphic NAND operations and, unlike other FHE schemes, can perform various operations such as minimum, maximum, and comparison. However, TFHE requires Torus Learning With Error (TLWE) encryption, which encrypts one bit at a time, leading to less efficient encryption and larger ciphertext size compared to other schemes. Additionally, SBCs have a limited number of hardware accelerators compared to servers, making it challenging to achieve the same level of optimization as on servers. In this study, we propose a novel SBC-specific design, \textsf{TFHE-SBC}, to accelerate client-side TFHE operations and enhance communication and energy efficiency. Experimental results demonstrate that \textsf{TFHE-SBC} encryption is up to 2486 times faster, improves communication efficiency by 512 times, and achieves 12 to 2004 times greater energy efficiency than the state-of-the-art.
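For context on the client-side cost that \textsf{TFHE-SBC} targets, the following is a minimal pure-Python sketch of TLWE encryption of a single bit over the discretized torus (Torus32). The dimension, noise level, and the ±1/8 message encoding are typical TFHE defaults assumed here for illustration, not necessarily the paper's configuration; the size printout shows why bit-wise TLWE encryption inflates ciphertexts.

```python
# Minimal TLWE (Torus Learning With Error) encryption sketch in pure Python.
# N_LWE, SIGMA, and the +-1/8 encoding are common TFHE-style defaults assumed
# for illustration; they are not the exact TFHE-SBC parameters.
import random

N_LWE = 630          # LWE dimension (typical client-side TFHE parameter)
SIGMA = 2 ** -15     # standard deviation of the Gaussian noise, on the torus
TORUS = 2 ** 32      # torus elements stored as 32-bit integers (Torus32)

def keygen():
    """Secret key: a uniformly random binary vector of length N_LWE."""
    return [random.randint(0, 1) for _ in range(N_LWE)]

def encrypt_bit(bit, key):
    """Encrypt one bit as a TLWE sample (a_1..a_n, b) with b = <a, s> + mu + e."""
    mu = TORUS // 8 if bit else -(TORUS // 8) % TORUS    # encode the bit as +-1/8
    a = [random.randrange(TORUS) for _ in range(N_LWE)]  # uniform random mask
    e = round(random.gauss(0, SIGMA) * TORUS)            # small Gaussian noise
    b = (sum(ai * si for ai, si in zip(a, key)) + mu + e) % TORUS
    return a, b

def decrypt_bit(ct, key):
    """Remove the mask and decide the bit by the sign of the phase."""
    a, b = ct
    phase = (b - sum(ai * si for ai, si in zip(a, key))) % TORUS
    return 1 if phase < TORUS // 2 else 0

if __name__ == "__main__":
    sk = keygen()
    ct = encrypt_bit(1, sk)
    print("decrypted:", decrypt_bit(ct, sk))
    # One plaintext bit becomes (N_LWE + 1) 32-bit words, roughly 2.5 KB,
    # which is the ciphertext expansion the abstract refers to.
    print("ciphertext size (bytes):", (N_LWE + 1) * 4)
```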
Related papers
- Decoder-Hybrid-Decoder Architecture for Efficient Reasoning with Long Generation [129.45368843861917]
We introduce the Gated Memory Unit (GMU), a simple yet effective mechanism for efficient memory sharing across layers. We apply it to create SambaY, a decoder-hybrid-decoder architecture that incorporates GMUs to share memory readout states from a Samba-based self-decoder.
arXiv Detail & Related papers (2025-07-09T07:27:00Z) - CIPHERMATCH: Accelerating Homomorphic Encryption-Based String Matching via Memory-Efficient Data Packing and In-Flash Processing [8.114331115730021]
Homomorphic encryption (HE) allows secure computation on encrypted data without revealing the original data.
Many cloud computing applications (e.g., DNA read mapping, biometric matching, web search) use exact string matching as a key operation.
Prior string matching algorithms that use homomorphic encryption are limited by high computational latency.
arXiv Detail & Related papers (2025-03-12T00:25:58Z) - Cryptanalysis via Machine Learning Based Information Theoretic Metrics [58.96805474751668]
We propose two novel applications of machine learning (ML) algorithms to perform cryptanalysis on any cryptosystem.
These algorithms can be readily applied in an audit setting to evaluate the robustness of a cryptosystem.
We show that our classification model correctly identifies the encryption schemes that are not IND-CPA secure, such as DES, RSA, and AES ECB, with high accuracy.
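The summary does not state which features the classifier uses; as a minimal, hypothetical illustration of why AES in ECB mode is distinguishable (and hence not IND-CPA secure), the sketch below computes a duplicate-ciphertext-block statistic. ECB maps equal plaintext blocks to equal ciphertext blocks, so structured data leaves visible repetitions that a randomized mode does not.

```python
# Illustrative only: a duplicate-block statistic of the kind an ML-based
# distinguisher could learn. The function names and threshold are made up
# for this sketch, not taken from the paper.
from collections import Counter

def duplicate_block_fraction(ciphertext: bytes, block_size: int = 16) -> float:
    """Fraction of ciphertext blocks that occur more than once."""
    blocks = [ciphertext[i:i + block_size]
              for i in range(0, len(ciphertext) - block_size + 1, block_size)]
    counts = Counter(blocks)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(blocks) if blocks else 0.0

def looks_like_ecb(ciphertext: bytes, threshold: float = 0.05) -> bool:
    """Toy one-feature classifier: flag ciphertexts with repeated blocks."""
    return duplicate_block_fraction(ciphertext) > threshold

if __name__ == "__main__":
    import os
    random_ct = os.urandom(1024)       # synthetic stand-in for a CPA-secure mode
    ecb_like_ct = os.urandom(16) * 64  # repeated blocks, as ECB produces on repetitive data
    print(looks_like_ecb(random_ct), looks_like_ecb(ecb_like_ct))  # False True
```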
arXiv Detail & Related papers (2025-01-25T04:53:36Z) - Hades: Homomorphic Augmented Decryption for Efficient Symbol-comparison -- A Database's Perspective [1.3824176915623292]
This paper introduces HADES, a novel cryptographic framework that enables efficient and secure comparisons on encrypted data. Based on the Ring Learning with Errors (RLWE) problem, HADES provides CPA-security and incorporates perturbation-aware encryption to mitigate frequency-analysis attacks.
arXiv Detail & Related papers (2024-12-28T02:47:14Z) - Nemesis: Noise-randomized Encryption with Modular Efficiency and Secure Integration in Machine Learning Systems [1.3824176915623292]
Nemesis is a framework that accelerates FHE-based machine learning systems without compromising accuracy or security. We prove the security of Nemesis under standard cryptographic assumptions. Results show that Nemesis significantly reduces the computational overhead of FHE-based ML systems.
arXiv Detail & Related papers (2024-12-18T22:52:12Z) - Efficient Homomorphically Encrypted Convolutional Neural Network Without Rotation [6.03124479597323]
This paper proposes a novel reformulated joint procedure and a new filter coefficient packing scheme to eliminate ciphertext rotations without affecting the security of the HE scheme.
For various plain-20s over the CIFAR-10/100 datasets, our design reduces the running time of the Conv and FC layers by 15.5% and the communication cost between client and server by more than 50%, compared to the best prior design.
arXiv Detail & Related papers (2024-09-08T19:46:25Z) - Efficient Encoder-Decoder Transformer Decoding for Decomposable Tasks [53.550782959908524]
We introduce a new configuration for encoder-decoder models that improves efficiency on structured output and decomposable tasks.
Our method, prompt-in-decoder (PiD), encodes the input once and decodes the output in parallel, boosting both training and inference efficiency.
arXiv Detail & Related papers (2024-03-19T19:27:23Z) - SOCI^+: An Enhanced Toolkit for Secure OutsourcedComputation on Integers [50.608828039206365]
We propose SOCI+ which significantly improves the performance of SOCI.
SOCI+ employs a novel (2, 2)-threshold Paillier cryptosystem with fast encryption and decryption as its cryptographic primitive.
Compared with SOCI, our experimental evaluation shows that SOCI+ is up to 5.4 times more efficient in computation and 40% less in communication overhead.
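SOCI+ builds on a (2,2)-threshold Paillier cryptosystem. The sketch below is only textbook single-key Paillier with toy primes, showing the additive homomorphism (multiplying ciphertexts adds plaintexts) that secure outsourced integer computation relies on; it does not reproduce SOCI+'s threshold key splitting or its fast encryption/decryption optimizations.

```python
# Textbook Paillier sketch (NOT the (2,2)-threshold variant used by SOCI+),
# with toy primes for readability; real deployments use ~1024-bit primes.
import math
import random

def keygen(p=1009, q=1013):
    n = p * q
    lam = math.lcm(p - 1, q - 1)       # lambda = lcm(p-1, q-1)
    g = n + 1                          # standard choice of generator
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(m, pk):
    n, g = pk
    while True:                        # randomizer r must be coprime to n
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(c, sk):
    lam, mu, n = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

if __name__ == "__main__":
    pk, sk = keygen()
    c = (encrypt(20, pk) * encrypt(22, pk)) % (pk[0] ** 2)
    print(decrypt(c, sk))              # 42: product of ciphertexts adds plaintexts
```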
arXiv Detail & Related papers (2023-09-27T05:19:32Z) - Blockwise Parallel Transformer for Large Context Models [70.97386897478238]
Blockwise Parallel Transformer (BPT) is a blockwise computation of self-attention and feedforward network fusion to minimize memory costs.
By processing longer input sequences while maintaining memory efficiency, BPT enables training sequences 32 times longer than vanilla Transformers and up to 4 times longer than previous memory-efficient methods.
arXiv Detail & Related papers (2023-05-30T19:25:51Z) - Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design [66.39546326221176]
Attention-based neural networks have become pervasive in many AI tasks.
The use of the attention mechanism and feed-forward network (FFN) demands excessive computational and memory resources.
This paper proposes a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFNs.
arXiv Detail & Related papers (2022-09-20T09:28:26Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is on the demand of cloud service users.
We introduce \textit{THE-X}, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - FFConv: Fast Factorized Neural Network Inference on Encrypted Data [9.868787266501036]
We propose a low-rank factorization method called FFConv to unify convolution and ciphertext packing.
Compared to prior art LoLa and Falcon, our method reduces the inference latency by up to 87% and 12%, respectively.
arXiv Detail & Related papers (2021-02-06T03:10:13Z) - Faster Secure Data Mining via Distributed Homomorphic Encryption [108.77460689459247]
Homomorphic Encryption (HE) is receiving increasing attention for its ability to perform computations directly on encrypted data.
We propose a novel general distributed HE-based data mining framework towards one step of solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing it with various data mining algorithms on benchmark datasets.
arXiv Detail & Related papers (2020-06-17T18:14:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.