ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function
Secret Sharing
- URL: http://arxiv.org/abs/2006.04593v4
- Date: Thu, 28 Oct 2021 09:09:16 GMT
- Title: ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function
Secret Sharing
- Authors: Théo Ryffel, Pierre Tholoniat, David Pointcheval and Francis Bach
- Abstract summary: AriaNN is a low-interaction privacy-preserving framework for private neural network training and inference on sensitive data.
We design primitives for the building blocks of neural networks such as ReLU, MaxPool and BatchNorm.
We propose an extension to support n-party private federated learning.
- Score: 2.6228228854413356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose AriaNN, a low-interaction privacy-preserving framework for private
neural network training and inference on sensitive data. Our semi-honest
2-party computation protocol (with a trusted dealer) leverages function secret
sharing, a recent lightweight cryptographic protocol that allows us to achieve
an efficient online phase. We design optimized primitives for the building
blocks of neural networks such as ReLU, MaxPool and BatchNorm. For instance, we
perform private comparison for ReLU operations with a single message of the
size of the input during the online phase, and with preprocessing keys close to
4X smaller than previous work. Last, we propose an extension to support n-party
private federated learning. We implement our framework as an extensible system
on top of PyTorch that leverages CPU and GPU hardware acceleration for
cryptographic and machine learning operations. We evaluate our end-to-end
system for private inference between distant servers on standard neural
networks such as AlexNet, VGG16 or ResNet18, and for private training on
smaller networks like LeNet. We show that computation rather than communication
is the main bottleneck and that using GPUs together with reduced key size is a
promising solution to overcome this barrier.
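The private-comparison idea behind the ReLU primitive can be illustrated with a toy two-party sketch. Everything below is illustrative rather than AriaNN's actual API: the `share`/`reconstruct` helpers and the stand-in dealer are assumptions, and real function secret sharing lets the parties derive shares of the sign bit from compact preprocessing keys and a single masked message rather than receiving the bit shares directly.

```python
import random

MOD = 2 ** 32  # ring Z_{2^32}, a common choice in 2PC frameworks

def share(x):
    """Split x into two additive shares that sum to x mod 2^32."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def reconstruct(s0, s1):
    """Recombine shares, reading the top half of the ring as negatives."""
    v = (s0 + s1) % MOD
    return v - MOD if v >= MOD // 2 else v

def dealer_share_sign_bit(x):
    """Toy stand-in for the FSS dealer: in the real protocol the parties
    derive shares of the bit (x >= 0) from FSS keys; here the dealer
    simply secret-shares the bit."""
    return share(1 if x >= 0 else 0)

def private_relu(x):
    x0, x1 = share(x)
    b0, b1 = dealer_share_sign_bit(x)
    # ReLU(x) = b * x. Multiplying two shared values would normally need
    # one more 2PC step; we reconstruct here only to check correctness.
    b = (b0 + b1) % MOD
    return b * reconstruct(x0, x1)
```

Each share on its own is a uniformly random ring element, which is why neither party learns anything about the input or the sign bit.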
Related papers
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
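RFAD replaces the exact NNGP kernel with a random feature approximation. The mechanism is easiest to see on the classical Gaussian (RBF) kernel, whose random Fourier features are a textbook construction; the sketch below approximates the RBF kernel rather than the NNGP kernel used in the paper, and the helper names are illustrative, not RFAD's API.

```python
import math
import random

def rbf_kernel(x, y, gamma=0.5):
    """Exact Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * d2)

def make_rff(dim, n_features, gamma=0.5, seed=0):
    """Random Fourier feature map z(.) with E[z(x) . z(y)] = k(x, y)."""
    rng = random.Random(seed)
    # frequencies ~ N(0, 2*gamma*I), phases ~ U[0, 2*pi)
    W = [[rng.gauss(0.0, math.sqrt(2 * gamma)) for _ in range(dim)]
         for _ in range(n_features)]
    b = [rng.uniform(0.0, 2 * math.pi) for _ in range(n_features)]
    scale = math.sqrt(2.0 / n_features)
    def z(x):
        return [scale * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + bi)
                for w, bi in zip(W, b)]
    return z
```

The approximation error shrinks like 1/sqrt(n_features), which is what makes a fixed, moderate feature count usable in place of an exact kernel.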
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- DEFER: Distributed Edge Inference for Deep Neural Networks [5.672898304129217]
We present DEFER, a framework for distributed edge inference.
It partitions deep neural networks into layers that can be spread across multiple compute nodes.
We find that for the ResNet50 model, the inference throughput of DEFER with 8 compute nodes is 53% higher and per node energy consumption is 63% lower than single device inference.
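The core partitioning step can be sketched as follows. This is a minimal illustration assuming contiguous, near-equal chunks; the function names are hypothetical, and real DEFER additionally handles serialization, networking, and node placement.

```python
def partition_layers(layers, n_nodes):
    """Split a layer list into n_nodes contiguous, near-equal chunks."""
    k, r = divmod(len(layers), n_nodes)
    parts, start = [], 0
    for i in range(n_nodes):
        size = k + (1 if i < r else 0)  # first r nodes get one extra layer
        parts.append(layers[start:start + size])
        start += size
    return parts

def run_pipeline(parts, x):
    """Each 'node' applies its chunk and forwards the activation onward."""
    for node_layers in parts:
        for layer in node_layers:
            x = layer(x)
    return x
```

In a real deployment only the intermediate activation crosses the network between nodes, which is why per-node energy can drop relative to single-device inference.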
arXiv Detail & Related papers (2022-01-18T06:50:45Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimension parameter model and large-scale mathematical calculation restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) approach, Soft Actor-Critic for discrete settings (SAC-d), which generates the exit point and the compressing bits via soft policy iterations.
Based on a latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- MORSE-STF: A Privacy Preserving Computation System [12.875477499515158]
We present Secure-TF, a privacy-preserving machine learning framework based on MPC.
Our framework is able to support widely-used machine learning models such as logistic regression, fully-connected neural network, and convolutional neural network.
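MPC frameworks of this kind typically evaluate the linear parts of such models with secret-shared multiplications. A classical way to do that is a Beaver triple, sketched below; this is a generic textbook construction, not Secure-TF's actual API, and the dealer is simulated locally.

```python
import random

MOD = 2 ** 32  # ring for additive secret sharing

def share(x):
    r = random.randrange(MOD)
    return r, (x - r) % MOD

def open_(a0, a1):
    return (a0 + a1) % MOD

def beaver_mul(x_sh, y_sh):
    """Multiply secret-shared x and y using a dealer triple (a, b, c = a*b)."""
    a, b = random.randrange(MOD), random.randrange(MOD)
    c = (a * b) % MOD
    a_sh, b_sh, c_sh = share(a), share(b), share(c)
    # The parties open eps = x - a and dlt = y - b; since a and b are
    # uniformly random masks, these openings leak nothing about x or y.
    eps = open_((x_sh[0] - a_sh[0]) % MOD, (x_sh[1] - a_sh[1]) % MOD)
    dlt = open_((y_sh[0] - b_sh[0]) % MOD, (y_sh[1] - b_sh[1]) % MOD)
    z = []
    for i in range(2):
        zi = (c_sh[i] + eps * b_sh[i] + dlt * a_sh[i]) % MOD
        if i == 0:  # exactly one party adds the public eps*dlt term
            zi = (zi + eps * dlt) % MOD
        z.append(zi)
    return tuple(z)
```

Expanding c + eps*b + dlt*a + eps*dlt with eps = x - a and dlt = y - b gives exactly x*y, which is why the shares recombine to the product.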
arXiv Detail & Related papers (2021-09-24T03:42:46Z)
- Sphynx: ReLU-Efficient Network Design for Private Inference [49.73927340643812]
We focus on private inference (PI), where the goal is to perform inference on a user's data sample using a service provider's model.
Existing PI methods for deep networks enable cryptographically secure inference with little drop in functionality.
This paper presents Sphynx, a ReLU-efficient network design method based on micro-search strategies for convolutional cell design.
arXiv Detail & Related papers (2021-06-17T18:11:10Z)
- CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU [8.633428365391666]
CryptGPU is a system for privacy-preserving machine learning that implements all operations on the GPU.
We introduce a new interface to embed cryptographic operations over secret-shared values into floating-point operations.
We show that our protocols achieve a 2x to 8x improvement in private inference and a 6x to 36x improvement for private training.
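One way to see why integer secret shares can be embedded into floating-point operations at all is limb decomposition: a 64-bit share is split into small limbs that a float64 represents exactly (its 53-bit mantissa holds any integer below 2^53 without rounding). The helper names below are hypothetical and this is only a sketch of the encoding idea, not CryptGPU's interface.

```python
def to_limbs(x, n_limbs=4, limb_bits=16):
    """Decompose a 64-bit integer into limbs stored exactly as float64."""
    mask = (1 << limb_bits) - 1
    return [float((x >> (limb_bits * i)) & mask) for i in range(n_limbs)]

def limb_add(a_limbs, b_limbs):
    """Elementwise float addition; each limb sum (< 2^17) stays exact."""
    return [a + b for a, b in zip(a_limbs, b_limbs)]

def from_limbs(limbs, limb_bits=16, mod_bits=64):
    """Recombine limbs, propagating carries, and reduce mod 2^64."""
    total = sum(int(v) << (limb_bits * i) for i, v in enumerate(limbs))
    return total % (1 << mod_bits)
```

Because every intermediate value stays below 2^53, the floating-point pipeline computes the same result the integer ring arithmetic would, which is what lets GPU float kernels carry the cryptographic workload.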
arXiv Detail & Related papers (2021-04-22T09:21:40Z)
- S++: A Fast and Deployable Secure-Computation Framework for Privacy-Preserving Neural Network Training [0.4893345190925178]
We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) using private data from multiple sources.
For the first time, we provide fast and verifiable protocols for all common activation functions and optimize them for running in a secret-shared manner.
arXiv Detail & Related papers (2021-01-28T15:48:54Z)
- POSEIDON: Privacy-Preserving Federated Neural Network Learning [8.103262600715864]
POSEIDON is the first of its kind in the regime of privacy-preserving neural network training.
It employs multiparty lattice-based cryptography to preserve the confidentiality of the training data, the model, and the evaluation data.
It trains a 3-layer neural network on the MNIST dataset with 784 features and 60K samples distributed among 10 parties in less than 2 hours.
arXiv Detail & Related papers (2020-09-01T11:06:31Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve equivalent performance as the corresponding FPN networks, but have only 1/4 memory cost and run 2x faster on modern GPU.
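What a Bounded ReLU buys is a fixed activation range that quantizes cleanly to integers. A minimal sketch, assuming a bound of 6 and 8-bit uniform quantization (both are illustrative parameter choices, not values taken from the paper):

```python
def bounded_relu(x, bound=6.0):
    """ReLU clipped at `bound`, so activations live in a fixed range."""
    return min(max(x, 0.0), bound)

def quantize(x, bound=6.0, bits=8):
    """Map [0, bound] uniformly onto the integers 0 .. 2^bits - 1."""
    levels = (1 << bits) - 1
    return round(bounded_relu(x, bound) / bound * levels)

def dequantize(q, bound=6.0, bits=8):
    """Map a quantized level back to its real-valued activation."""
    levels = (1 << bits) - 1
    return q / levels * bound
```

With an unbounded ReLU the quantization range would have to chase the largest activation, blowing up the step size; clipping fixes the range so the round-trip error stays below one quantization step.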
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
- Faster Secure Data Mining via Distributed Homomorphic Encryption [108.77460689459247]
Homomorphic Encryption (HE) is receiving more and more attention recently for its capability to perform computations over encrypted data.
We propose a novel general distributed HE-based data mining framework towards one step of solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing over various data mining algorithms and benchmark data-sets.
arXiv Detail & Related papers (2020-06-17T18:14:30Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.