UniHENN: Designing More Versatile Homomorphic Encryption-based CNNs without im2col
- URL: http://arxiv.org/abs/2402.03060v1
- Date: Mon, 5 Feb 2024 14:52:01 GMT
- Authors: Hyunmin Choi, Jihun Kim, Seungho Kim, Seonhye Park, Jeongyong Park, Wonbin Choi, Hyoungshick Kim
- Abstract summary: Homomorphic encryption enables computations on encrypted data without decryption.
UniHENN is a homomorphic encryption-based CNN architecture, eliminating the need for im2col.
Our experiments demonstrate that UniHENN surpasses the leading 2D CNN inference architecture, PyCrCNN, in inference time.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Homomorphic encryption enables computations on encrypted data without decryption, which is crucial for privacy-preserving cloud services. However, deploying convolutional neural networks (CNNs) with homomorphic encryption encounters significant challenges, particularly in converting input data into a two-dimensional matrix for convolution, typically achieved using the im2col technique. While efficient, this method limits the variety of deployable CNN models due to compatibility constraints with the encrypted data structure. UniHENN, a homomorphic encryption-based CNN architecture, eliminates the need for im2col, ensuring compatibility with a diverse range of CNN models using homomorphic encryption. Our experiments demonstrate that UniHENN surpasses the leading 2D CNN inference architecture, PyCrCNN, in inference time, as evidenced by its performance with the LeNet-1 model, where it averages 30.090 seconds, significantly faster than PyCrCNN's 794.064 seconds. Furthermore, UniHENN outperforms TenSEAL, which employs im2col, in processing concurrent images, an essential feature for high-demand cloud applications. The versatility of UniHENN is proven across various CNN architectures, including 1D and six different 2D CNNs, highlighting its flexibility and efficiency. These qualities establish UniHENN as a promising solution for privacy-preserving, cloud-based CNN services, addressing the increasing demand for scalable, secure, and efficient deep learning in cloud computing environments.
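To make the abstract's central point concrete, the following is a minimal sketch of the im2col technique that UniHENN avoids: each sliding patch of the input is unrolled into a column, so convolution reduces to one matrix multiplication. This is a generic illustration of im2col, not the paper's implementation; all names here are my own.

```python
import numpy as np

def im2col(image, k):
    """Unroll each k x k patch of a 2-D image into a column,
    so convolution becomes a single matrix multiplication."""
    h, w = image.shape
    out_h, out_w = h - k + 1, w - k + 1
    cols = np.empty((k * k, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = image[i:i + k, j:j + k].ravel()
    return cols

# Convolution via im2col: kernel row-vector times patch matrix.
img = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3))
out = (kernel.ravel() @ im2col(img, 3)).reshape(2, 2)

# Direct sliding-window computation for comparison (all-ones kernel = patch sums).
ref = np.array([[img[i:i + 3, j:j + 3].sum() for j in range(2)]
                for i in range(2)])
assert np.allclose(out, ref)
```

The rearrangement itself is the problem under homomorphic encryption: an encrypted ciphertext packs the input in a fixed layout, so the patch-reordering step constrains which CNN shapes can be deployed, which is the compatibility limit UniHENN removes.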
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module.
arXiv Detail & Related papers (2024-03-21T14:06:38Z)
- Dynamic Semantic Compression for CNN Inference in Multi-access Edge Computing: A Graph Reinforcement Learning-based Autoencoder [82.8833476520429]
We propose a novel semantic compression method, autoencoder-based CNN architecture (AECNN) for effective semantic extraction and compression in partial offloading.
In the semantic encoder, we introduce a feature compression module based on the channel attention mechanism in CNNs, to compress intermediate data by selecting the most informative features.
In the semantic decoder, we design a lightweight decoder to reconstruct the intermediate data through learning from the received compressed data to improve accuracy.
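As an illustration of the channel-attention compression idea described above, the sketch below scores feature-map channels squeeze-and-excitation style and transmits only the most informative ones. The weights are random and untrained, and the function names are my own; this is a simplified stand-in for the paper's module, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention_compress(feat, keep):
    """Score channels with a squeeze-and-excitation-style gate and
    keep only the `keep` highest-scoring ones (simplified sketch)."""
    c, h, w = feat.shape
    squeeze = feat.mean(axis=(1, 2))                 # global average pool -> (c,)
    # Tiny excitation MLP with random (untrained) weights, for illustration only.
    w1 = rng.normal(size=(c, c // 2))
    w2 = rng.normal(size=(c // 2, c))
    scores = 1 / (1 + np.exp(-(np.maximum(squeeze @ w1, 0) @ w2)))  # sigmoid gate
    top = np.argsort(scores)[-keep:]                 # channels worth transmitting
    return feat[top] * scores[top, None, None], top

feat = rng.normal(size=(32, 8, 8))                   # intermediate CNN feature map
compressed, idx = channel_attention_compress(feat, keep=8)
print(compressed.shape)   # (8, 8, 8): 4x fewer channels to send over the link
```

The decoder side would then reconstruct the full feature map from the received channels and their indices, which is where the lightweight learned decoder mentioned above comes in.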
arXiv Detail & Related papers (2024-01-19T15:19:47Z)
- A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural Networks [5.274804664403783]
Spiking Neural Networks (SNNs) mimic the behavior of the human brain to improve efficiency and reduce energy consumption.
Homomorphic encryption (HE) offers a solution, allowing calculations to be performed on encrypted data without decrypting it.
This research compares traditional deep neural networks (DNNs) and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption scheme.
arXiv Detail & Related papers (2023-08-10T15:26:35Z)
- High-Resolution Convolutional Neural Networks on Homomorphically Encrypted Data via Sharding Ciphertexts [0.08999666725996974]
We extend methods for evaluating DCNNs on images with larger dimensions and many channels, beyond what can be stored in single ciphertexts.
We show how existing DCNN models can be regularized during the training process to further improve efficiency and accuracy.
These techniques are applied to homomorphically evaluate a DCNN with high accuracy on the high-resolution ImageNet dataset, achieving $80.2\%$ top-1 accuracy.
arXiv Detail & Related papers (2023-06-15T15:16:16Z)
- Complex Network for Complex Problems: A comparative study of CNN and Complex-valued CNN [0.0]
Complex-valued convolutional neural networks (CV-CNN) can preserve the algebraic structure of complex-valued input data.
CV-CNNs have double the number of trainable parameters of real-valued CNNs.
This paper presents a comparative study of CNN, CNNx2 (CNN with double the number of trainable parameters as the CNN), and CV-CNN.
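The parameter-doubling claim above follows directly from how complex weights are stored: each complex tap carries a real and an imaginary part, and complex multiplication decomposes into real arithmetic. A quick sketch (layer sizes are arbitrary examples, not from the paper):

```python
import numpy as np

# A complex weight w = a + ib stores two real parameters per tap, so a
# CV-CNN conv layer holds exactly 2x the parameters of its real counterpart.
c_in, c_out, k = 3, 8, 3
real_params = c_out * c_in * k * k      # real-valued conv layer: 216
cv_params = 2 * real_params             # real part + imaginary part: 432

# Complex multiply implemented with real arithmetic:
# (a + ib)(x + iy) = (ax - by) + i(ay + bx)
a, b = 2.0, -1.0          # complex weight 2 - i
x, y = 3.0, 4.0           # complex input  3 + 4i
out = complex(a * x - b * y, a * y + b * x)
assert out == complex(a, b) * complex(x, y)   # matches Python's complex type
print(real_params, cv_params)   # 216 432
```

This is why the comparison pits CV-CNN against CNNx2 rather than a plain CNN: matching the real parameter count isolates the effect of preserving the complex algebraic structure.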
arXiv Detail & Related papers (2023-02-09T11:51:46Z)
- Attention-based Feature Compression for CNN Inference Offloading in Edge Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at end-device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- Towards a General Purpose CNN for Long Range Dependencies in $\mathrm{N}$D [49.57261544331683]
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential ($1\mathrm{D}$) and visual ($2\mathrm{D}$) data.
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
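The core mechanism behind the resolution independence claimed above is that kernel weights are not stored as a fixed-size array but generated by a small network mapping relative positions to weights, so one parameter set yields a kernel of any length. A hedged sketch of that idea (the MLP here is random and untrained, purely illustrative of the continuous-kernel construction):

```python
import numpy as np

rng = np.random.default_rng(1)
# One small MLP parameterizes the kernel; these weights are untrained
# and serve only to illustrate the mechanism.
w1, b1 = rng.normal(size=(1, 16)), rng.normal(size=16)
w2 = rng.normal(size=(16, 1))

def continuous_kernel(size):
    """Sample a kernel of any length from a single MLP that maps a
    relative position in [-1, 1] to a kernel weight."""
    pos = np.linspace(-1, 1, size)[:, None]        # (size, 1) positions
    return (np.tanh(pos @ w1 + b1) @ w2).ravel()   # (size,) kernel taps

# The same parameters yield kernels at any resolution or length:
short, long = continuous_kernel(5), continuous_kernel(33)
signal = rng.normal(size=100)
out = np.convolve(signal, long, mode="valid")      # usable on 1-D data as-is
print(short.shape, long.shape, out.shape)          # (5,) (33,) (68,)
```

Because the kernel is a function of continuous position rather than a discrete weight grid, the same architecture transfers across resolutions and dimensionalities without structural changes, which is the paper's stated goal.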
arXiv Detail & Related papers (2022-06-07T15:48:02Z)
- A Novel Sleep Stage Classification Using CNN Generated by an Efficient Neural Architecture Search with a New Data Processing Trick [4.365107026636095]
We propose an efficient five-sleep-stage classification method using convolutional neural networks (CNNs) with a novel data processing trick.
We make full use of a genetic algorithm (GA), NASG, to search for the best CNN architecture.
We verify convergence of our data processing trick and compare the performance of traditional CNNs before and after using our trick.
arXiv Detail & Related papers (2021-10-27T10:36:52Z)
- Exploring Deep Hybrid Tensor-to-Vector Network Architectures for Regression Based Speech Enhancement [53.47564132861866]
We find that a hybrid architecture, namely CNN-TT, is capable of maintaining a good quality performance with a reduced model parameter size.
CNN-TT is composed of several convolutional layers at the bottom for feature extraction to improve speech quality.
arXiv Detail & Related papers (2020-07-25T22:21:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.