HE is all you need: Compressing FHE Ciphertexts using Additive HE
- URL: http://arxiv.org/abs/2303.09043v2
- Date: Sun, 28 Jul 2024 19:22:13 GMT
- Title: HE is all you need: Compressing FHE Ciphertexts using Additive HE
- Authors: Rasoul Akhavan Mahdavi, Abdulrahman Diaa, Florian Kerschbaum
- Abstract summary: Homomorphic Encryption (HE) is a commonly used tool for building privacy-preserving applications.
We present a new compression technique that uses an additive homomorphic encryption scheme with small ciphertexts to compress large homomorphic ciphertexts.
- Score: 29.043858170208875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Homomorphic Encryption (HE) is a commonly used tool for building privacy-preserving applications. However, in scenarios with many clients and high-latency networks, communication costs due to large ciphertext sizes are the bottleneck. In this paper, we present a new compression technique that uses an additive homomorphic encryption scheme with small ciphertexts to compress large homomorphic ciphertexts based on Learning with Errors (LWE). Our technique exploits the linear step in the decryption of such ciphertexts to delegate part of the decryption to the server. We achieve compression ratios of up to 90%, requiring only a small compression key. By compressing multiple ciphertexts simultaneously, we achieve compression rates of over 99%. Our compression technique can be readily applied to applications that transmit LWE ciphertexts from the server to the client as the response to a query. Furthermore, we apply our technique to private information retrieval (PIR), where a client accesses a database without revealing its query. Using our compression technique, we propose ZipPIR, a PIR protocol which achieves the lowest overall communication cost among all protocols in the literature. ZipPIR does not require any communication with the client in the preprocessing phase, making it a great solution for PIR use cases with ephemeral clients or high-latency networks.
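The abstract pins down the mechanism precisely enough to sketch. Below is a minimal, self-contained Python illustration, not the paper's implementation: the Paillier instance, the toy LWE parameters, and all helper names (`he_enc`, `compress`, `decompress`, etc.) are assumptions chosen for readability, with deliberately insecure sizes. The server homomorphically evaluates the linear step of LWE decryption, b - ⟨a, s⟩ mod q, inside one small additive-HE ciphertext, using encryptions of the secret key as the compression key; the client finishes decryption on that compact ciphertext.

```python
import math
import random

# --- Toy additive HE (Paillier) with hardcoded, insecure primes. ---
# Far too small for real security; only illustrates that the additive
# scheme's ciphertexts live in Z_{N^2} and support + and scalar-*.
P, Q = 104729, 1299709
N, N2 = P * Q, (P * Q) ** 2
LAM = math.lcm(P - 1, Q - 1)
G = N + 1
MU = pow((pow(G, LAM, N2) - 1) // N, -1, N)

def he_enc(m: int) -> int:
    r = random.randrange(1, N)
    return (pow(G, m, N2) * pow(r, N, N2)) % N2

def he_dec(c: int) -> int:
    return (((pow(c, LAM, N2) - 1) // N) * MU) % N

def he_add(c1: int, c2: int) -> int:   # Enc(m1), Enc(m2) -> Enc(m1 + m2)
    return (c1 * c2) % N2

def he_scale(c: int, k: int) -> int:   # Enc(m), k -> Enc(k * m)
    return pow(c, k, N2)

# --- Toy LWE ciphertext standing in for a large FHE ciphertext. ---
QMOD, DIM = 1 << 12, 64     # LWE modulus q and dimension n (toy sizes)
DELTA = QMOD // 16          # scaling factor for 4-bit messages

s = [random.randrange(2) for _ in range(DIM)]   # binary LWE secret key

def lwe_enc(m: int):
    a = [random.randrange(QMOD) for _ in range(DIM)]
    e = random.randrange(-4, 5)                 # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + DELTA * m + e) % QMOD
    return a, b

# Compression key: additive-HE encryptions of the LWE secret-key entries.
comp_key = [he_enc(si) for si in s]

def compress(a, b):
    # Server: evaluate the linear step of LWE decryption, b - <a, s>,
    # under the additive HE. Multiplying by (q - a_i) keeps the Paillier
    # plaintext non-negative over the integers while still equaling
    # b - <a, s> mod q after the client reduces mod q.
    acc = he_enc(b)
    for ai, ck in zip(a, comp_key):
        acc = he_add(acc, he_scale(ck, QMOD - ai))
    return acc  # one small Paillier ciphertext replaces the (n+1)-element (a, b)

def decompress(c):
    # Client: finish decryption on the compact ciphertext, then round.
    phase = he_dec(c) % QMOD                    # = DELTA * m + e (mod q)
    return round(phase / DELTA) % (QMOD // DELTA)

a, b = lwe_enc(9)
assert decompress(compress(a, b)) == 9
```

With toy sizes like these the point is only the shape of the protocol: the client ships one compression key once and thereafter receives a single small additive-HE ciphertext per query response. The over-99% rate for simultaneous compression would come from batching, presumably by packing several decryption phases b - ⟨a, s⟩ into one additive-HE plaintext, though the exact packing is not spelled out in the abstract.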
Related papers
- FineZip: Pushing the Limits of Large Language Models for Practical Lossless Text Compression [1.9699843876565526]
FineZip is a novel text compression system that combines online memorization and dynamic context to drastically reduce compression time.
FineZip compresses the benchmark corpus in approximately 4 hours versus 9.5 days, a 54x improvement over LLMZip with comparable compression performance.
arXiv Detail & Related papers (2024-09-25T17:58:35Z)
- Concise and Precise Context Compression for Tool-Using Language Models [60.606281074373136]
We propose two strategies for compressing tool documentation into concise and precise summary sequences for tool-using language models.
Results on API-Bank and APIBench show that our approach achieves performance comparable to the upper-bound baseline at compression ratios of up to 16x.
arXiv Detail & Related papers (2024-07-02T08:17:00Z)
- UniCompress: Enhancing Multi-Data Medical Image Compression with Knowledge Distillation [59.3877309501938]
Implicit Neural Representation (INR) networks have shown remarkable versatility due to their flexible compression ratios.
We introduce a codebook containing frequency domain information as a prior input to the INR network.
This enhances the representational power of INR and provides distinctive conditioning for different image blocks.
arXiv Detail & Related papers (2024-05-27T05:52:13Z)
- Secure Inference for Vertically Partitioned Data Using Multiparty Homomorphic Encryption [15.867269549049428]
We propose a secure inference protocol for a distributed setting involving a single server node and multiple client nodes.
We assume that the observed data vector is partitioned across multiple client nodes while the deep learning model is located at the server node.
arXiv Detail & Related papers (2024-05-06T18:17:27Z)
- Training LLMs over Neurally Compressed Text [55.11828645767342]
This paper explores the idea of training large language models (LLMs) over highly compressed text.
We propose Equal-Info Windows, a novel compression technique whereby text is segmented into blocks that each compress to the same bit length.
We demonstrate effective learning over neurally compressed text that improves with scale, and outperforms byte-level baselines by a wide margin on perplexity and inference speed benchmarks.
arXiv Detail & Related papers (2024-04-04T17:48:28Z)
- LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression [43.048684907893104]
This paper focuses on task-agnostic prompt compression for better generalizability and efficiency.
We formulate prompt compression as a token classification problem to guarantee the faithfulness of the compressed prompt to the original one.
Our approach leads to lower latency by explicitly learning the compression objective with smaller models such as XLM-RoBERTa-large and mBERT.
arXiv Detail & Related papers (2024-03-19T17:59:56Z)
- Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in engineering applications such as radar, sonar, and ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements than the full received signal in time.
arXiv Detail & Related papers (2022-09-28T15:16:58Z)
- COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z)
- FFConv: Fast Factorized Neural Network Inference on Encrypted Data [9.868787266501036]
We propose a low-rank factorization method called FFConv to unify convolution and ciphertext packing.
Compared to prior art LoLa and Falcon, our method reduces the inference latency by up to 87% and 12%, respectively.
arXiv Detail & Related papers (2021-02-06T03:10:13Z)
- HERS: Homomorphically Encrypted Representation Search [56.87295029135185]
We present a method to search for a probe (or query) image representation against a large gallery in the encrypted domain.
Our encryption scheme is agnostic to how the fixed-length representation is obtained and can therefore be applied to any fixed-length representation in any application domain.
arXiv Detail & Related papers (2020-03-27T01:10:54Z)
- Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor [5.09755285351264]
We consider an unbiased compression method inspired by the Kashin representation of vectors, which we call Kashin compression (KC).
KC enjoys a dimension-independent variance bound, for which we derive an explicit formula even in the regime where only a few bits need to be communicated per vector entry.
arXiv Detail & Related papers (2020-02-20T17:20:51Z)