Efficient Representations for Privacy-Preserving Inference
- URL: http://arxiv.org/abs/2110.08321v1
- Date: Fri, 15 Oct 2021 19:03:35 GMT
- Title: Efficient Representations for Privacy-Preserving Inference
- Authors: Han Xuanyuan, Francisco Vargas, Stephen Cummins
- Abstract summary: We construct and evaluate private CNNs on the MNIST and CIFAR-10 datasets.
We achieve over a two-fold reduction in the number of operations required for inference with the CryptoNets architecture.
- Score: 3.330229314824913
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have a wide range of applications across multiple
domains such as computer vision and medicine. In many cases, the input of a
model at inference time can consist of sensitive user data, which raises
questions concerning the levels of privacy and trust guaranteed by such
services. Much existing work has leveraged homomorphic encryption (HE) schemes
that enable computation on encrypted data to achieve private inference for
multi-layer perceptrons and CNNs. An early work in this direction was
CryptoNets, which takes 250 seconds for one MNIST inference. The main
limitation of such approaches is their computational cost, which is due to the
costly nature of the NTT (number theoretic transform) operations that
constitute HE
operations. Others have proposed the use of model pruning and efficient data
representations to reduce the number of HE operations required. In this paper,
we focus on improving upon existing work by proposing changes to the
representations of intermediate tensors during CNN inference. We construct and
evaluate private CNNs on the MNIST and CIFAR-10 datasets, and achieve over a
two-fold reduction in the number of operations required for inference with the
CryptoNets architecture.
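To make this concrete, the sketch below shows the two ingredients such HE pipelines combine: packing an intermediate tensor into the SIMD slots of a single ciphertext, and replacing ReLU with the square activation used by CryptoNets, since HE supports only additions and multiplications. This is a minimal illustration using the open-source TenSEAL library, not the authors' code; the parameters and exact API are assumptions and may vary across library versions.

```python
# Minimal sketch (assumes TenSEAL's CKKS API; not the paper's implementation).
import tenseal as ts

# CKKS context: approximate arithmetic over packed vectors of reals.
# These parameters are illustrative, not tuned for security or performance.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()  # rotations are needed for packed dot products

# Pack a (flattened) intermediate tensor into one ciphertext: this SIMD-style
# representation lets a single HE operation act on many values at once.
activations = [0.5, -1.2, 3.0, 0.25]
enc = ts.ckks_vector(context, activations)

# HE evaluates only additions and multiplications, so CryptoNets-style
# networks use the polynomial activation x^2 in place of ReLU.
enc_sq = enc * enc

# A packed dot product with plaintext weights: conceptually one FC/conv step.
weights = [0.1, 0.2, -0.3, 0.4]
enc_out = enc_sq.dot(weights)

print(enc_sq.decrypt())   # ~[0.25, 1.44, 9.0, 0.0625]
print(enc_out.decrypt())  # ~[-2.36], the dot product result
```

Each ciphertext multiplication and rotation above decomposes into the NTT operations the abstract identifies as the bottleneck, which is why representations that reduce the number of packed operations translate directly into faster private inference.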
Related papers
- A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural Networks [5.274804664403783]
Spiking Neural Networks (SNNs) mimic the behavior of the human brain to improve efficiency and reduce energy consumption.
Homomorphic encryption (HE) offers a solution, allowing calculations to be performed on encrypted data without decrypting it.
This research compares traditional deep neural networks (DNNs) and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption scheme; a minimal BFV sketch appears after this list.
arXiv Detail & Related papers (2023-08-10T15:26:35Z)
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce computation time by 71.6% with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- NN-EMD: Efficiently Training Neural Networks using Encrypted Multi-Sourced Datasets [7.067870969078555]
Training a machine learning model over an encrypted dataset is a promising approach to the privacy-preserving machine learning task.
We propose a novel framework, NN-EMD, to train a deep neural network (DNN) model over multiple datasets collected from multiple sources.
We evaluate our framework's performance with regard to training time and model accuracy on the MNIST dataset.
arXiv Detail & Related papers (2020-12-18T23:01:20Z)
- Towards Scalable and Privacy-Preserving Deep Neural Network via Algorithmic-Cryptographic Co-design [28.789702559193675]
We propose SPNN, a Scalable and Privacy-preserving deep Neural Network learning framework.
From a cryptographic perspective, we propose using two types of cryptographic techniques: secret sharing and homomorphic encryption.
Experimental results conducted on real-world datasets demonstrate the superiority of SPNN.
arXiv Detail & Related papers (2020-12-17T02:26:16Z)
- Training and Inference for Integer-Based Semantic Segmentation Network [18.457074855823315]
We propose a new quantization framework for training and inference of semantic segmentation networks.
Our framework is evaluated on mainstream semantic segmentation networks like FCN-VGG16 and DeepLabv3-ResNet50.
arXiv Detail & Related papers (2020-11-30T02:07:07Z)
- POSEIDON: Privacy-Preserving Federated Neural Network Learning [8.103262600715864]
POSEIDON is the first of its kind in the regime of privacy-preserving neural network training.
It employs multiparty lattice-based cryptography to preserve the confidentiality of the training data, the model, and the evaluation data.
It trains a 3-layer neural network on the MNIST dataset with 784 features and 60K samples distributed among 10 parties in less than 2 hours.
arXiv Detail & Related papers (2020-09-01T11:06:31Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of the benchmark tasks, across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
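To illustrate the exact-integer arithmetic of the BFV scheme mentioned in the first related paper above, here is a minimal sketch, again assuming TenSEAL's API with illustrative parameters (not code from that paper):

```python
# Minimal sketch of BFV's exact integer arithmetic (assumes TenSEAL's API).
import tenseal as ts

# BFV context: exact arithmetic modulo plain_modulus; parameters illustrative.
context = ts.context(
    ts.SCHEME_TYPE.BFV,
    poly_modulus_degree=4096,
    plain_modulus=1032193,
)

enc = ts.bfv_vector(context, [1, 2, 3])

# Additions and multiplications are evaluated without ever decrypting.
enc_sum = enc + enc    # element-wise addition under encryption
enc_prod = enc * enc   # element-wise multiplication under encryption

print(enc_sum.decrypt())   # [2, 4, 6] -- exact, unlike CKKS's approximations
print(enc_prod.decrypt())  # [1, 4, 9]
```

Unlike CKKS, BFV computes exactly over integers, a natural fit for the discrete spike-based values SNNs operate on.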
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.