MPC-enabled Privacy-Preserving Neural Network Training against Malicious
Attack
- URL: http://arxiv.org/abs/2007.12557v3
- Date: Wed, 10 Feb 2021 05:51:53 GMT
- Title: MPC-enabled Privacy-Preserving Neural Network Training against Malicious
Attack
- Authors: Ziyao Liu, Ivan Tjuawinata, Chaoping Xing, Kwok-Yan Lam
- Abstract summary: We propose an approach for constructing efficient $n$-party protocols for secure neural network training.
Our actively secure neural network training incurs affordable efficiency overheads of around 2X and 2.7X in LAN and WAN settings.
Besides, we propose a scheme to allow additive shares defined over an integer ring $\mathbb{Z}_N$ to be securely converted to additive shares over a finite field $\mathbb{Z}_Q$.
- Score: 44.50542274828587
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The application of secure multiparty computation (MPC) in machine learning,
especially privacy-preserving neural network training, has attracted tremendous
attention from the research community in recent years. MPC enables several data
owners to jointly train a neural network while preserving the data privacy of
each participant. However, most previous works focus on the semi-honest
threat model, which cannot withstand fraudulent messages sent by malicious
participants. In this paper, we propose an approach for constructing efficient
$n$-party protocols for secure neural network training that can provide
security for all honest participants even when a majority of the parties are
malicious. Compared to other designs that provide semi-honest security in a
dishonest majority setting, our actively secure neural network training incurs
affordable efficiency overheads of around 2X and 2.7X in LAN and WAN settings,
respectively. Besides, we propose a scheme to allow additive shares defined
over an integer ring $\mathbb{Z}_N$ to be securely converted to additive shares
over a finite field $\mathbb{Z}_Q$, which may be of independent interest. Such a
conversion scheme is essential for securely and correctly converting the shared
Beaver triples generated over an integer ring in the preprocessing phase into
triples defined over a field for use in the online-phase computation.
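To make the online phase concrete, here is a minimal single-process Python sketch of standard Beaver-triple multiplication, the computation those preprocessed triples feed into. All parties are simulated locally; the modulus, party count, and helper names are illustrative assumptions rather than details from the paper.

```python
import secrets

Q = 2**61 - 1  # a prime; stand-in for the online-phase field Z_Q (illustrative)

def share(x, n, mod):
    """Additively share x among n parties: the shares sum to x mod `mod`."""
    s = [secrets.randbelow(mod) for _ in range(n - 1)]
    return s + [(x - sum(s)) % mod]

def reconstruct(shares, mod):
    return sum(shares) % mod

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh, mod):
    """Multiply secret-shared x and y using a preprocessed triple (a, b, c=ab)."""
    # Parties open only the masked values d = x - a and e = y - b.
    d = reconstruct([(xi - ai) % mod for xi, ai in zip(x_sh, a_sh)], mod)
    e = reconstruct([(yi - bi) % mod for yi, bi in zip(y_sh, b_sh)], mod)
    # Locally: z_i = c_i + d*b_i + e*a_i; one party adds the public d*e.
    z = [(ci + d * bi + e * ai) % mod for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
    z[0] = (z[0] + d * e) % mod
    return z

n, x, y = 3, 12345, 67890
a, b = secrets.randbelow(Q), secrets.randbelow(Q)
c = a * b % Q  # the Beaver triple comes from the preprocessing phase
z_sh = beaver_mul(share(x, n, Q), share(y, n, Q),
                  share(a, n, Q), share(b, n, Q), share(c, n, Q), Q)
assert reconstruct(z_sh, Q) == x * y % Q
```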
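A small numeric illustration of why the ring-to-field conversion is nontrivial in the first place: over the integers the shares sum to $x + kN$ for some wrap-around count $k$, and $kN$ is generally nonzero modulo $Q$, so the same shares cannot simply be reinterpreted over $\mathbb{Z}_Q$. This sketch shows only the problem the conversion scheme solves, not the secure solution.

```python
import secrets

N, Q = 2**64, 2**61 - 1  # illustrative ring and field moduli

def share(x, n, mod):
    s = [secrets.randbelow(mod) for _ in range(n - 1)]
    return s + [(x - sum(s)) % mod]

x = 42
sh = share(x, 3, N)
print(sum(sh) % N == x)  # True: the shares reconstruct x over Z_N
# Over the integers, sum(sh) = x + k*N for some k >= 0. Reducing the same
# shares mod Q therefore yields x + k*N mod Q, which is wrong whenever k*N
# does not vanish mod Q. A dedicated secure conversion protocol is needed.
print(sum(sh) % Q == x)  # almost surely False
```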
Related papers
- Edge-Only Universal Adversarial Attacks in Distributed Learning [49.546479320670464]
In this work, we explore the feasibility of generating universal adversarial attacks when an attacker has access to the edge part of the model only.
Our approach shows that adversaries can induce effective mispredictions in the unknown cloud part by leveraging key features on the edge side.
Our results on ImageNet demonstrate strong attack transferability to the unknown cloud part.
arXiv Detail & Related papers (2024-11-15T11:06:24Z)
- The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries [14.232901861974819]
Privacy-preserving machine learning (PPML) is an innovative approach that allows for secure data analysis while safeguarding sensitive information.
We introduce an efficient protocol for secure linear function evaluation.
We extend the protocol to handle linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models.
arXiv Detail & Related papers (2024-11-14T08:55:14Z)
- Secure Deep Learning-based Distributed Intelligence on Pocket-sized Drones [75.80952211739185]
Palm-sized nano-drones are an appealing class of edge nodes, but their limited computational resources prevent running large deep-learning models onboard.
Adopting an edge-fog computational paradigm, we can offload part of the computation to the fog; however, this poses security concerns if the fog node, or the communication link, cannot be trusted.
We propose a novel distributed edge-fog execution scheme that validates fog computation by redundantly executing a random subnetwork aboard our nano-drone.
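A heavily simplified sketch of the validation pattern described above: the drone recomputes one randomly chosen layer on-board and checks it against the activation the fog claims for that layer. The toy ReLU network, its shapes, and the comparison tolerance are hypothetical stand-ins, not the paper's architecture or protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
# A toy 3-layer ReLU MLP standing in for the offloaded "fog" part.
weights = [rng.normal(size=(64, 32)),
           rng.normal(size=(64, 64)),
           rng.normal(size=(10, 64))]

def forward(x, weights):
    acts, a = [], x
    for W in weights:
        a = np.maximum(W @ a, 0.0)
        acts.append(a)
    return acts

x = rng.normal(size=32)
fog_acts = forward(x, weights)  # fog returns all layer activations

# The drone spot-checks one random layer it can afford to recompute.
layer = rng.integers(len(weights))
a_in = x if layer == 0 else fog_acts[layer - 1]
expected = np.maximum(weights[layer] @ a_in, 0.0)
print(np.allclose(expected, fog_acts[layer]))  # False would flag tampering
```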
arXiv Detail & Related papers (2023-07-04T08:29:41Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that it can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
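The $\ell_\infty$-norm box bound can be illustrated with generic interval bound propagation, sketched below; note this is plain interval arithmetic, not the paper's non-Euclidean embedded-network construction, and the layer sizes are made up.

```python
import numpy as np

def affine_box(W, b, lo, hi):
    # Push the box [lo, hi] through x -> W x + b using center/radius form.
    mu, r = (hi + lo) / 2.0, (hi - lo) / 2.0
    c = W @ mu + b
    rad = np.abs(W) @ r
    return c - rad, c + rad

def relu_box(lo, hi):
    # ReLU is monotone, so it maps boxes to boxes elementwise.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)
x = rng.normal(size=4)
eps = 0.1  # l_inf perturbation budget around the input x
lo, hi = affine_box(W1, b1, x - eps, x + eps)
lo, hi = relu_box(lo, hi)
lo, hi = affine_box(W2, b2, lo, hi)
print(lo, hi)  # a box guaranteed to contain every reachable output
```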
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Deploying Convolutional Networks on Untrusted Platforms Using 2D Holographic Reduced Representations [33.26156710843837]
By leveraging Holographic Reduced Representations (HRR), we create a neural network with a pseudo-encryption style defense that empirically shows robustness to attack.
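The HRR primitive behind this defense binds vectors by circular convolution and approximately unbinds them by circular correlation. A minimal sketch, where the dimension and the reading of one vector as a secret key are illustrative assumptions:

```python
import numpy as np

def bind(x, k):
    # HRR binding: circular convolution, computed via FFT.
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(k), n=len(x))

def unbind(s, k):
    # Circular correlation with k approximately inverts the binding.
    return np.fft.irfft(np.fft.rfft(s) * np.conj(np.fft.rfft(k)), n=len(s))

d = 4096
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0 / np.sqrt(d), d)  # "plaintext" feature vector
k = rng.normal(0.0, 1.0 / np.sqrt(d), d)  # secret key vector
s = bind(x, k)                            # looks like noise without k
x_hat = unbind(s, k)
cos = x_hat @ x / (np.linalg.norm(x_hat) * np.linalg.norm(x))
print(cos)  # well above 0 (chance), reflecting HRR's noisy but usable recovery
```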
arXiv Detail & Related papers (2022-06-13T03:31:39Z)
- Robustness Certificates for Implicit Neural Networks: A Mixed Monotone Contractive Approach [60.67748036747221]
Implicit neural networks offer competitive performance and reduced memory consumption.
However, they can be brittle with respect to adversarial input perturbations.
This paper proposes a theoretical and computational framework for robustness verification of implicit neural networks.
arXiv Detail & Related papers (2021-12-10T03:08:55Z)
- PRICURE: Privacy-Preserving Collaborative Inference in a Multi-Party Setting [3.822543555265593]
This paper presents PRICURE, a system that combines complementary strengths of secure multi-party computation and differential privacy.
PRICURE enables privacy-preserving collaborative prediction among multiple model owners.
We evaluate PRICURE on neural networks across four datasets including benchmark medical image classification datasets.
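PRICURE's concrete protocol is not given here, but a generic composition of its two ingredients, aggregation of the owners' predictions (the step MPC would protect) and Laplace noise calibrated for differential privacy, might look like the following hypothetical sketch:

```python
import numpy as np

def dp_collaborative_predict(party_scores, epsilon, sensitivity=1.0):
    # party_scores: one prediction vector per model owner. In a real system
    # the averaging would run under MPC so no owner sees the others' scores.
    agg = np.mean(party_scores, axis=0)
    # Laplace mechanism: noise scaled to sensitivity/epsilon bounds what the
    # released prediction reveals about any single contribution.
    noise = np.random.default_rng().laplace(0.0, sensitivity / epsilon, agg.shape)
    return agg + noise

scores = [np.array([0.7, 0.2, 0.1]),
          np.array([0.6, 0.3, 0.1]),
          np.array([0.8, 0.1, 0.1])]
print(dp_collaborative_predict(scores, epsilon=1.0))
```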
arXiv Detail & Related papers (2021-02-19T05:55:53Z)
- POSEIDON: Privacy-Preserving Federated Neural Network Learning [8.103262600715864]
POSEIDON is the first of its kind in the regime of privacy-preserving neural network training.
It employs multiparty lattice-based cryptography to preserve the confidentiality of the training data, the model, and the evaluation data.
It trains a 3-layer neural network on the MNIST dataset with 784 features and 60K samples distributed among 10 parties in less than 2 hours.
arXiv Detail & Related papers (2020-09-01T11:06:31Z)
- Industrial Scale Privacy Preserving Deep Neural Network [23.690146141150407]
We propose an industrial-scale, privacy-preserving neural network learning paradigm that is secure against semi-honest adversaries.
We conduct experiments on a real-world fraud detection dataset and a financial distress prediction dataset.
arXiv Detail & Related papers (2020-03-11T10:15:37Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)