High-Resolution Convolutional Neural Networks on Homomorphically
Encrypted Data via Sharding Ciphertexts
- URL: http://arxiv.org/abs/2306.09189v2
- Date: Mon, 29 Jan 2024 03:20:21 GMT
- Title: High-Resolution Convolutional Neural Networks on Homomorphically
Encrypted Data via Sharding Ciphertexts
- Authors: Vivian Maloney, Richard F. Obrecht, Vikram Saraph, Prathibha Rama,
Kate Tallaksen
- Abstract summary: We extend methods for evaluating DCNNs on images with larger dimensions and many channels, beyond what can be stored in single ciphertexts.
We show how existing DCNN models can be regularized during the training process to further improve efficiency and accuracy.
These techniques are applied to homomorphically evaluate a DCNN with high accuracy on the high-resolution ImageNet dataset, achieving $80.2\%$ top-1 accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Deep Convolutional Neural Networks (DCNNs) including the ResNet-20
architecture have been privately evaluated on encrypted, low-resolution data
with the Residue-Number-System Cheon-Kim-Kim-Song (RNS-CKKS) homomorphic
encryption scheme. We extend methods for evaluating DCNNs on images with larger
dimensions and many channels, beyond what can be stored in single ciphertexts.
Additionally, we simplify and improve the efficiency of the recently introduced
multiplexed image format, demonstrating that homomorphic evaluation works with
standard, row-major matrix packing and yields encrypted inference speedups of
$4.6-6.5\times$. We also show how existing DCNN models can be
regularized during the training process to further improve efficiency and
accuracy. These techniques are applied to homomorphically evaluate a DCNN with
high accuracy on the high-resolution ImageNet dataset, achieving $80.2\%$ top-1
accuracy. We also achieve $98.3\%$ accuracy for homomorphically evaluated CNNs on the
CIFAR-10 dataset.
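To make the sharding idea concrete, here is a minimal NumPy sketch of splitting a row-major-packed, multi-channel image across several fixed-capacity slot vectors. This is not the authors' implementation: the slot count and zero-padding scheme are illustrative assumptions, and encryption itself is omitted.

```python
import numpy as np

SLOT_COUNT = 2 ** 15  # typical RNS-CKKS slot capacity; an illustrative assumption

def shard_image(image: np.ndarray) -> list[np.ndarray]:
    """Flatten a (channels, height, width) image in row-major (C-order)
    layout and split it into fixed-size chunks, one per ciphertext shard.
    In a real pipeline each chunk would be encrypted separately."""
    flat = image.reshape(-1)                      # standard row-major packing
    n_shards = -(-flat.size // SLOT_COUNT)        # ceiling division
    padded = np.zeros(n_shards * SLOT_COUNT, dtype=flat.dtype)
    padded[:flat.size] = flat
    return [padded[i * SLOT_COUNT:(i + 1) * SLOT_COUNT] for i in range(n_shards)]

def unshard_image(shards: list[np.ndarray], shape: tuple) -> np.ndarray:
    """Inverse operation: concatenate shards and restore the original layout."""
    flat = np.concatenate(shards)[:int(np.prod(shape))]
    return flat.reshape(shape)

# A 3x224x224 ImageNet-sized input exceeds 2^15 slots, so it spans 5 shards.
img = np.random.randn(3, 224, 224).astype(np.float32)
shards = shard_image(img)
assert np.allclose(unshard_image(shards, img.shape), img)
print(f"{img.size} values -> {len(shards)} ciphertext shards")
```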
Related papers
- UniHENN: Designing Faster and More Versatile Homomorphic Encryption-based CNNs without im2col
Homomorphic encryption (HE) enables privacy-preserving deep learning by allowing computations on encrypted data without decryption.
However, deploying convolutional neural networks (CNNs) with HE is challenging because input data must be converted into a two-dimensional matrix for convolution using the im2col technique (sketched below).
UniHENN is a novel HE-based CNN architecture that eliminates the need for im2col, enhancing its versatility and compatibility with a broader range of CNN models.
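For context, im2col unrolls each receptive field into a column so that convolution reduces to a single matrix multiplication. A minimal NumPy sketch (stride 1, no padding, hypothetical helper name):

```python
import numpy as np

def im2col(x: np.ndarray, kh: int, kw: int) -> np.ndarray:
    """Unroll a (C, H, W) input into a (C*kh*kw, out_h*out_w) matrix,
    one column per receptive field (stride 1, no padding)."""
    c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, out_h * out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[:, i:i + kh, j:j + kw].reshape(-1)
    return cols

# Convolution becomes one matmul: (flattened filters) @ (unrolled input).
x = np.random.randn(3, 8, 8)
weights = np.random.randn(16, 3, 3, 3)           # 16 filters of shape 3x3x3
out = (weights.reshape(16, -1) @ im2col(x, 3, 3)).reshape(16, 6, 6)
```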
arXiv Detail & Related papers (2024-02-05T14:52:01Z)
- Efficient Privacy-Preserving Convolutional Spiking Neural Networks with FHE
Fully Homomorphic Encryption (FHE) is a key technology for privacy-preserving computation.
However, FHE has limitations in processing continuous non-polynomial functions.
We present a framework called FHE-DiCSNN for homomorphic SNNs.
FHE-DiCSNN achieves an accuracy of 97.94% on ciphertexts, with a loss of only 0.53% compared to the original network's accuracy of 98.47%.
arXiv Detail & Related papers (2023-09-16T15:37:18Z)
- HyPHEN: A Hybrid Packing Method and Optimizations for Homomorphic Encryption-Based Neural Networks
Convolutional neural network (CNN) inference using fully homomorphic encryption (FHE) is a promising private inference (PI) solution.
We present HyPHEN, a deep HCNN construction that incorporates novel convolution algorithms and data packing methods.
As a result, HyPHEN brings the latency of HCNN CIFAR-10 inference down to a practical level at 1.4 seconds (ResNet-20) and demonstrates HCNN ImageNet inference for the first time at 14.7 seconds (ResNet-18).
arXiv Detail & Related papers (2023-02-05T15:36:51Z)
- Attention-based Feature Compression for CNN Inference Offloading in Edge Computing
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at end-device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
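As a rough illustration of the device-edge split, a toy linear bottleneck can stand in for the paper's learned autoencoder; all dimensions below are assumptions chosen only to reproduce a 256x ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed split point: the device runs the CNN up to an intermediate layer,
# squeezes the (C, H, W) feature map through a learned bottleneck, and sends
# only the compact code to the edge server, which decodes and resumes inference.
C, H, W, CODE = 64, 16, 16, 64          # 64*16*16 / 64 = 256x fewer values sent
W_enc = rng.standard_normal((CODE, C * H * W)) * 0.01   # stand-in encoder weights
W_dec = rng.standard_normal((C * H * W, CODE)) * 0.01   # stand-in decoder weights

features = rng.standard_normal((C, H, W))
code = W_enc @ features.reshape(-1)                     # transmitted: CODE floats
reconstructed = (W_dec @ code).reshape(C, H, W)         # edge side continues here
print(f"compression ratio: {features.size / code.size:.0f}x")
```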
arXiv Detail & Related papers (2022-11-24T18:10:01Z)
- A heterogeneous group CNN for image super-resolution
Convolutional neural networks (CNNs) have obtained remarkable performance via deep architectures.
We present a heterogeneous group SR CNN (HGSRCNN) via leveraging structure information of different types to obtain a high-quality image.
arXiv Detail & Related papers (2022-09-26T04:14:59Z)
- Towards a General Purpose CNN for Long Range Dependencies in $\mathrm{N}$D
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential ($1\mathrm{D}$) and visual data ($2\mathrm{D}$).
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
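A sketch of the core idea, under the assumption that the continuous kernel is a small coordinate-MLP (the paper's actual parameterization may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny coordinate-MLP standing in for a continuous kernel: it maps a relative
# position in [-1, 1] to a kernel value, so the same parameters can be sampled
# at any resolution (and, with d-dimensional inputs, any dimensionality).
W1, b1 = rng.standard_normal((32, 1)), np.zeros(32)
W2, b2 = rng.standard_normal((1, 32)), np.zeros(1)

def kernel(positions: np.ndarray) -> np.ndarray:
    """Sample the continuous kernel at the given 1D positions."""
    h = np.tanh(positions[:, None] @ W1.T + b1)
    return (h @ W2.T + b2).ravel()

# The same parameters yield a length-9 kernel for coarse signals and a
# length-33 kernel for fine ones; no structural change is required.
k_coarse = kernel(np.linspace(-1, 1, 9))
k_fine = kernel(np.linspace(-1, 1, 33))
signal = rng.standard_normal(128)
out = np.convolve(signal, k_fine, mode="same")
```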
arXiv Detail & Related papers (2022-06-07T15:48:02Z)
- Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
This paper investigates compression from the perspective of compactly representing and storing trained parameters.
We leverage additive quantization, an extreme lossy compression method invented for image descriptors, to compactly represent the parameters.
We conduct experiments on MobileNet-v2, VGG-11, ResNet-50, Feature Pyramid Networks, and pruned DNNs trained for classification, detection, and segmentation tasks.
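A minimal sketch of the encode/decode mechanics of additive quantization: each parameter block is approximated as a sum of one codeword per codebook, and only the chosen indices are stored. Random codebooks and greedy selection are used here purely for illustration; the real method trains the codebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_quantize(vec: np.ndarray, codebooks: list) -> list:
    """Greedily pick one codeword per codebook so their sum approximates
    `vec`; only the indices need to be stored (a lossy, compact code)."""
    residual, indices = vec.copy(), []
    for cb in codebooks:                       # cb: (K, d) candidate codewords
        best = int(np.argmin(((residual - cb) ** 2).sum(axis=1)))
        indices.append(best)
        residual -= cb[best]
    return indices

def reconstruct(indices: list, codebooks: list) -> np.ndarray:
    return sum(cb[i] for cb, i in zip(codebooks, indices))

d, K, M = 8, 256, 2                            # block dim, codebook size, # codebooks
codebooks = [rng.standard_normal((K, d)) for _ in range(M)]
w = rng.standard_normal(d)                     # one block of trained parameters
approx = reconstruct(additive_quantize(w, codebooks), codebooks)
# Storage drops from d floats to M one-byte indices (plus shared codebooks).
```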
arXiv Detail & Related papers (2021-11-19T17:03:11Z)
- Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion
This paper presents a zero-shot direct model inversion framework that recovers the input to the trained model given only the internal representation.
We empirically show that modern classification models on ImageNet can, surprisingly, be inverted, allowing an approximate recovery of the original 224x224px images from a representation after more than 20 layers.
arXiv Detail & Related papers (2021-07-13T18:01:43Z)
- DCT-SNN: Using DCT to Distribute Spatial Information over Time for Learning Low-Latency Spiking Neural Networks
Spiking Neural Networks (SNNs) offer a promising alternative to traditional deep learning frameworks.
However, SNNs suffer from high inference latency, which is a major bottleneck to their deployment.
We propose a scalable time-based encoding scheme that utilizes the Discrete Cosine Transform (DCT) to reduce the number of timesteps required for inference.
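To illustrate the flavor of such an encoding (a hand-rolled sketch, not the paper's exact scheme): each timestep can present one band of DCT coefficients, so the image's information is spread over T steps instead of being repeated at every step.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis; rows are frequency components."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    basis = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    basis[0] *= 1 / np.sqrt(n)
    basis[1:] *= np.sqrt(2 / n)
    return basis

T = 8                                   # number of SNN timesteps (assumed)
img = np.random.randn(32, 32)
D = dct_matrix(32)
coeffs = D @ img @ D.T                  # 2D DCT of the input image

# Timestep t contributes only the image content carried by the t-th band of
# frequencies; summing the first T contributions gives a low-frequency
# approximation, so few timesteps still convey most of the image.
step_inputs = []
for t in range(T):
    band = np.zeros_like(coeffs)
    band[t] = coeffs[t]                 # keep one row of frequency coefficients
    step_inputs.append(D.T @ band @ D)  # map the band back to pixel space
approx = sum(step_inputs)
```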
arXiv Detail & Related papers (2020-10-05T05:55:34Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features of the original full-precision networks onto high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)