POSEIDON: Privacy-Preserving Federated Neural Network Learning
- URL: http://arxiv.org/abs/2009.00349v3
- Date: Fri, 8 Jan 2021 13:27:01 GMT
- Title: POSEIDON: Privacy-Preserving Federated Neural Network Learning
- Authors: Sinem Sav, Apostolos Pyrgelis, Juan R. Troncoso-Pastoriza, David
Froelicher, Jean-Philippe Bossuat, Joao Sa Sousa, and Jean-Pierre Hubaux
- Abstract summary: POSEIDON is the first of its kind in the regime of privacy-preserving neural network training.
It employs multiparty lattice-based cryptography to preserve the confidentiality of the training data, the model, and the evaluation data.
It trains a 3-layer neural network on the MNIST dataset with 784 features and 60K samples distributed among 10 parties in less than 2 hours.
- Score: 8.103262600715864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we address the problem of privacy-preserving training and
evaluation of neural networks in an $N$-party, federated learning setting. We
propose a novel system, POSEIDON, the first of its kind in the regime of
privacy-preserving neural network training. It employs multiparty lattice-based
cryptography to preserve the confidentiality of the training data, the model,
and the evaluation data, under a passive-adversary model and collusions between
up to $N-1$ parties. To efficiently execute the secure backpropagation
algorithm for training neural networks, we provide a generic packing approach
that enables Single Instruction, Multiple Data (SIMD) operations on encrypted
data. We also introduce arbitrary linear transformations within the
cryptographic bootstrapping operation, optimizing the costly cryptographic
computations over the parties, and we define a constrained optimization problem
for choosing the cryptographic parameters. Our experimental results show that
POSEIDON achieves accuracy similar to centralized or decentralized non-private
approaches and that its computation and communication overhead scales linearly
with the number of parties. POSEIDON trains a 3-layer neural network on the
MNIST dataset with 784 features and 60K samples distributed among 10 parties in
less than 2 hours.
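The packing approach described in the abstract can be illustrated with a plaintext simulation: a vector is packed into a fixed number of ciphertext "slots", a single homomorphic operation then acts on every slot at once (SIMD), and cyclic slot rotations are used to sum slots, e.g. for the inner products inside backpropagation. This is only an illustrative sketch of the general slot-packing idea, not POSEIDON's lattice-based implementation; all names below are hypothetical.

```python
SLOTS = 8  # number of plaintext slots per (simulated) ciphertext

def pack(values, slots=SLOTS):
    """Pack a vector into fixed-size slots, zero-padding the remainder."""
    return list(values) + [0.0] * (slots - len(values))

def simd_mul(a, b):
    # one "homomorphic" multiplication touches every slot at once
    return [x * y for x, y in zip(a, b)]

def simd_add(a, b):
    return [x + y for x, y in zip(a, b)]

def rotate(ct, k):
    # cyclic slot rotation: the primitive used to fold slots together
    return ct[k:] + ct[:k]

# Inner product of a weight row with an input vector, computed slot-wise:
# one SIMD multiply, then log2(SLOTS) rotate-and-add steps.
w = pack([0.5, -1.0, 2.0, 0.25])
x = pack([1.0, 2.0, 3.0, 4.0])
acc = simd_mul(w, x)
k = 1
while k < SLOTS:
    acc = simd_add(acc, rotate(acc, k))
    k *= 2
# slot 0 now holds the inner product <w, x> = 5.5
```

In a real scheme the rotate-and-add reduction is what makes encrypted matrix-vector products affordable: the number of (expensive) ciphertext operations grows with log of the slot count rather than with the vector length.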
Related papers
- Neural Network Training on Encrypted Data with TFHE [0.0]
We present an approach to outsourcing of training neural networks while preserving data confidentiality from malicious parties.
We use fully homomorphic encryption to build a unified training approach that works on encrypted data and learns quantized neural network models.
arXiv Detail & Related papers (2024-01-29T13:07:08Z)
- A Novel Neural Network-Based Federated Learning System for Imbalanced and Non-IID Data [2.9642661320713555]
Most machine learning algorithms rely heavily on large amounts of data which may be collected from various sources.
To combat this issue, researchers have introduced federated learning, where a prediction model is learnt while ensuring the privacy of clients' data.
In this research, we propose a centralized, neural network-based federated learning system.
arXiv Detail & Related papers (2023-11-16T17:14:07Z)
- Fast, Distribution-free Predictive Inference for Neural Networks with Coverage Guarantees [25.798057062452443]
This paper introduces a novel, computationally-efficient algorithm for predictive inference (PI)
It requires no distributional assumptions on the data and can be computed faster than existing bootstrap-type methods for neural networks.
arXiv Detail & Related papers (2023-06-11T04:03:58Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice requires expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training [64.54200987493573]
We propose NeuraCrypt, a private encoding scheme based on random deep neural networks.
NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data-owner.
We show that NeuraCrypt achieves competitive accuracy to non-private baselines on a variety of x-ray tasks.
arXiv Detail & Related papers (2021-06-04T13:42:21Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- NN-EMD: Efficiently Training Neural Networks using Encrypted Multi-Sourced Datasets [7.067870969078555]
Training a machine learning model over an encrypted dataset is a promising approach to the privacy-preserving machine learning task.
We propose a novel framework, NN-EMD, to train a deep neural network (DNN) model over multiple datasets collected from multiple sources.
We evaluate our framework's performance with regard to training time and model accuracy on the MNIST dataset.
arXiv Detail & Related papers (2020-12-18T23:01:20Z)
- Towards Scalable and Privacy-Preserving Deep Neural Network via Algorithmic-Cryptographic Co-design [28.789702559193675]
We propose SPNN - a Scalable and Privacy-preserving deep Neural Network learning framework.
From a cryptographic perspective, we propose using two types of cryptographic techniques, i.e., secret sharing and homomorphic encryption.
Experimental results conducted on real-world datasets demonstrate the superiority of SPNN.
arXiv Detail & Related papers (2020-12-17T02:26:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.