Optimizing Privacy-Preserving Outsourced Convolutional Neural Network
Predictions
- URL: http://arxiv.org/abs/2002.10944v3
- Date: Mon, 29 Jun 2020 16:52:16 GMT
- Title: Optimizing Privacy-Preserving Outsourced Convolutional Neural Network
Predictions
- Authors: Minghui Li, Sherman S. M. Chow, Shengshan Hu, Yuejing Yan, Chao Shen,
Qian Wang
- Abstract summary: Recent research focuses on the privacy of the query and results but does not provide model privacy against the model-hosting server.
This paper proposes a new scheme for privacy-preserving neural network prediction in the outsourced setting.
We leverage two non-colluding servers with secret sharing and triplet generation to minimize the usage of heavyweight cryptography.
- Score: 23.563775490174415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks are machine-learning models widely applied in
various prediction tasks, such as computer vision and medical image analysis.
Their great predictive power requires extensive computation, which encourages
model owners to host the prediction service on a cloud platform. Recent
research focuses on the privacy of the query and results, but it does not
provide model privacy against the model-hosting server and may leak partial
information about the results. Some schemes further require frequent
interactions with the querier or incur heavy computation overheads, which
discourages queriers from using the prediction service. This paper proposes a new scheme for
privacy-preserving neural network prediction in the outsourced setting, i.e.,
the server cannot learn the query, (intermediate) results, or the model.
Similar to SecureML (S&P'17), a representative work that provides model
privacy, we leverage two non-colluding servers with secret sharing and triplet
generation to minimize the usage of heavyweight cryptography. Further, we adopt
asynchronous computation to improve the throughput, and design garbled circuits
for the non-polynomial activation function to keep the same accuracy as the
underlying network (instead of approximating it). Our experiments on the MNIST
dataset show that our scheme achieves an average of 122x, 14.63x, and 36.69x
reduction in latency compared to SecureML, MiniONN (CCS'17), and EzPC
(EuroS&P'19), respectively. For the communication costs, our scheme outperforms
SecureML by 1.09x, MiniONN by 36.69x, and EzPC by 31.32x on average. On the
CIFAR dataset, our scheme achieves a lower latency by a factor of 7.14x and
3.48x compared to MiniONN and EzPC, respectively. Our scheme also provides
13.88x and 77.46x lower communication costs than MiniONN and EzPC on the CIFAR
dataset.
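The two-server design described above rests on additive secret sharing with pre-generated multiplication (Beaver) triplets, so that linear-layer products are computed without heavyweight cryptography; non-polynomial activations such as ReLU are handled separately by garbled circuits. A minimal sketch of triplet-based multiplication between two non-colluding servers (the function names and toy modulus are illustrative, not the paper's implementation):

```python
import random

P = 2**61 - 1  # toy prime modulus for additive sharing (illustrative)

def share(x):
    """Split x into two additive shares modulo P, one per server."""
    r = random.randrange(P)
    return (r, (x - r) % P)

def beaver_mul(x_sh, y_sh, triple_sh):
    """Multiply secret-shared values with a pre-generated Beaver triple
    (a, b, c) where c = a*b. Servers only ever open d = x - a and
    e = y - b, which reveal nothing about x or y."""
    (a0, b0, c0), (a1, b1, c1) = triple_sh
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P    # jointly opened
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P    # jointly opened
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # server 0's share (adds d*e once)
    z1 = (c1 + d * b1 + e * a1) % P          # server 1's share
    return z0, z1

# Offline phase: a triplet-generation protocol (or dealer) shares a random triple.
a, b = random.randrange(P), random.randrange(P)
c = a * b % P
(a0, a1), (b0, b1), (c0, c1) = share(a), share(b), share(c)
triple_sh = ((a0, b0, c0), (a1, b1, c1))

# Online phase: multiply shared inputs 6 and 7 without either server seeing them.
z0, z1 = beaver_mul(share(6), share(7), triple_sh)
print((z0 + z1) % P)  # -> 42
```

Because the expensive triple generation happens offline, the online phase needs only cheap modular arithmetic, which is what lets such schemes avoid heavyweight cryptography for linear layers.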
Related papers
- Hawk: Accurate and Fast Privacy-Preserving Machine Learning Using Secure Lookup Table Computation [11.265356632908846]
Training machine learning models on data from multiple entities without direct data sharing can unlock applications otherwise hindered by business, legal, or ethical constraints.
We design and implement new privacy-preserving machine learning protocols for logistic regression and neural network models.
Our evaluations show that our logistic regression protocol is up to 9x faster, and the neural network training is up to 688x faster than SecureML.
arXiv Detail & Related papers (2024-03-26T00:51:12Z)
- Deep Learning for Day Forecasts from Sparse Observations [60.041805328514876]
Deep neural networks offer an alternative paradigm for modeling weather conditions.
MetNet-3 learns from both dense and sparse data sensors and makes predictions up to 24 hours ahead for precipitation, wind, temperature and dew point.
MetNet-3 has high temporal and spatial resolution, up to 2 minutes and 1 km respectively, as well as low operational latency.
arXiv Detail & Related papers (2023-06-06T07:07:54Z)
- DNNAbacus: Toward Accurate Computational Cost Prediction for Deep Neural Networks [0.9896984829010892]
This paper investigates the computational resource demands of 29 classical deep neural networks and builds accurate models for predicting computational costs.
We propose a lightweight prediction approach DNNAbacus with a novel network structural matrix for network representation.
Our experimental results show that the mean relative error (MRE) is 0.9% with respect to time and 2.8% with respect to memory for 29 classic models, which is much lower than state-of-the-art works.
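The mean relative error quoted here is the standard metric MRE = (1/n) Σ |predicted − actual| / actual. A quick illustration with toy numbers (not DNNAbacus's data):

```python
def mean_relative_error(pred, actual):
    """MRE: average of |pred - actual| / actual over all samples."""
    return sum(abs(p - a) / a for p, a in zip(pred, actual)) / len(pred)

# Toy latency predictions (ms) vs. measured values.
pred = [10.2, 19.0, 31.5]
actual = [10.0, 20.0, 30.0]
print(round(mean_relative_error(pred, actual), 3))  # -> 0.04
```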
arXiv Detail & Related papers (2022-05-24T14:21:27Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting the 5G URL
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- PhiNets: a scalable backbone for low-power AI at the edge [2.7910505923792646]
We present PhiNets, a new scalable backbone optimized for deep-learning-based image processing on resource-constrained platforms.
PhiNets are based on inverted residual blocks specifically designed to decouple the computational cost, working memory, and parameter memory.
We demonstrate our approach on a prototype node based on an STM32H743 microcontroller.
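The decoupling that inverted residual blocks enable can be seen from the parameter count of the standard expand (1×1) → depthwise (k×k) → project (1×1) structure: the expansion factor scales compute and working memory, while the projection width controls parameter memory. A back-of-the-envelope sketch using the usual MobileNetV2-style counts (not formulas taken from the PhiNets paper):

```python
def inverted_residual_params(c_in, c_out, t, k=3):
    """Parameter count of an inverted residual block, ignoring biases/BN.
    expand: 1x1 conv c_in -> t*c_in; depthwise: one k x k filter per
    hidden channel; project: 1x1 conv t*c_in -> c_out."""
    hidden = t * c_in
    expand = c_in * hidden       # 1x1 pointwise expansion
    depthwise = hidden * k * k   # depthwise k x k convolution
    project = hidden * c_out     # 1x1 pointwise projection
    return expand + depthwise + project

# Raising t inflates the expand/project terms but not the output width,
# so compute can be tuned independently of downstream parameter memory.
print(inverted_residual_params(32, 32, t=6))  # -> 14016
```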
arXiv Detail & Related papers (2021-10-01T12:03:25Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
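One standard way to blunt low-quality or malicious uploads — not necessarily this paper's specific mechanism — is a robust aggregator such as the coordinate-wise median, which a single outlier client cannot drag arbitrarily far:

```python
from statistics import median

def median_aggregate(client_updates):
    """Coordinate-wise median of client model updates (lists of floats).
    Unlike a plain average, one corrupted client cannot shift any
    coordinate past the honest majority's values."""
    return [median(coord) for coord in zip(*client_updates)]

honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
bad = [100.0, -100.0]  # a corrupted upload from an unreliable client
print(median_aggregate(honest + [bad]))
```

With the mean, the corrupted upload would pull the first coordinate to roughly 25.75; the median stays near the honest values.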
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models and statistical models against the roofline model and a refined roofline model.
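The roofline baseline used for comparison estimates a layer's execution time from whichever resource saturates first: t = max(FLOPs / peak_FLOPS, bytes / bandwidth). A minimal sketch with illustrative hardware numbers:

```python
def roofline_time(flops, bytes_moved, peak_flops, mem_bw):
    """Roofline lower bound on execution time: the layer is either
    compute-bound or memory-bound, and the slower term dominates."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

# Illustrative hardware: 1 TFLOP/s peak compute, 100 GB/s memory bandwidth.
t = roofline_time(flops=2e9, bytes_moved=4e8, peak_flops=1e12, mem_bw=1e11)
print(t)  # memory-bound: 4e8/1e11 = 4 ms exceeds 2e9/1e12 = 2 ms
```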
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Trust-Based Cloud Machine Learning Model Selection For Industrial IoT and Smart City Services [5.333802479607541]
We consider the paradigm where cloud service providers collect big data from resource-constrained devices for building Machine Learning prediction models.
Our proposed solution comprises an intelligent run-time reconfiguration that maximizes the level of trust of ML models.
Our results show that the selected model's trust level is 0.7% to 2.53% less compared to the results obtained using ILP.
arXiv Detail & Related papers (2020-08-11T23:58:03Z)
- ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing [2.6228228854413356]
AriaNN is a low-interaction privacy-preserving framework for private neural network training and inference on sensitive data.
We design primitives for the building blocks of neural networks such as ReLU, MaxPool and BatchNorm.
We implement our framework as an extension to support n-party private federated learning.
arXiv Detail & Related papers (2020-06-08T13:40:27Z)
- A Privacy-Preserving-Oriented DNN Pruning and Mobile Acceleration Framework [56.57225686288006]
Weight pruning of deep neural networks (DNNs) has been proposed to satisfy the limited storage and computing capability of mobile edge devices.
Previous pruning methods mainly focus on reducing the model size and/or improving performance without considering the privacy of user data.
We propose a privacy-preserving-oriented pruning and mobile acceleration framework that does not require the private training dataset.
arXiv Detail & Related papers (2020-03-13T23:52:03Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.