Industrial Scale Privacy Preserving Deep Neural Network
- URL: http://arxiv.org/abs/2003.05198v2
- Date: Thu, 12 Mar 2020 05:42:35 GMT
- Title: Industrial Scale Privacy Preserving Deep Neural Network
- Authors: Longfei Zheng, Chaochao Chen, Yingting Liu, Bingzhe Wu, Xibin Wu, Li
Wang, Lei Wang, Jun Zhou, Shuang Yang
- Abstract summary: We propose an industrial scale privacy preserving neural network learning paradigm, which is secure against semi-honest adversaries.
We conduct experiments on a real-world fraud detection dataset and a financial distress prediction dataset.
- Score: 23.690146141150407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Neural Networks (DNNs) have shown great potential in many
real-world applications such as fraud detection and distress prediction.
Meanwhile, data isolation has become a serious problem: different parties
cannot share data with each other. To solve this issue, most research
leverages cryptographic techniques to train secure DNN models for multiple
parties without compromising their private data. Although such methods have
strong security guarantees, they are difficult to scale to deep networks and
large datasets due to their high communication and computation complexities.
To address the scalability of existing secure DNNs in data isolation
scenarios, in this paper we propose an industrial scale privacy preserving
neural network learning paradigm that is secure against semi-honest
adversaries. Our main idea is to split the computation graph of the DNN into
two parts: the computations related to private data are performed by each
party using cryptographic techniques, and the remaining computations are done
by a neutral server with high computation ability. We also present a defender
mechanism for further privacy protection. We conduct experiments on a
real-world fraud detection dataset and a financial distress prediction
dataset; the encouraging results demonstrate the practicality of our proposal.
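To make the split-graph idea concrete, here is a minimal NumPy sketch of a forward pass under the paradigm the abstract describes. All names (`PartyModel`, `server_forward`, the layer sizes) are hypothetical, and plaintext partial results stand in for the cryptographic protocol: in the real system each party's contribution would be exchanged as secret shares or ciphertexts, never as raw arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

class PartyModel:
    """One data holder: keeps its private features and first-layer weights locally."""
    def __init__(self, n_features, hidden_dim):
        self.W = rng.normal(scale=0.1, size=(n_features, hidden_dim))

    def local_forward(self, x_private):
        # Computed inside the party's trust boundary; in the paper this step
        # runs under cryptographic protection rather than in the clear.
        return x_private @ self.W

def server_forward(partial_sums, W2, w_out):
    # The neutral server only ever sees the aggregated hidden layer,
    # never any party's raw features.
    h = np.maximum(0.0, sum(partial_sums))       # ReLU over the merged first layer
    h2 = np.maximum(0.0, h @ W2)                 # remaining (non-private) layers
    return 1.0 / (1.0 + np.exp(-(h2 @ w_out)))   # e.g., a fraud probability

# Two parties hold disjoint (vertically partitioned) features of the same users.
xa, xb = rng.normal(size=(4, 3)), rng.normal(size=(4, 5))
party_a, party_b = PartyModel(3, 8), PartyModel(5, 8)
W2 = rng.normal(scale=0.1, size=(8, 8))
w_out = rng.normal(scale=0.1, size=8)

partials = [party_a.local_forward(xa), party_b.local_forward(xb)]
print(server_forward(partials, W2, w_out))  # one score per sample
```

The division of labor mirrors the abstract: only the first-layer computation touches private inputs, so only that part needs expensive cryptography, while the deep, compute-heavy remainder runs on the untrusted-but-semi-honest server.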
Related papers
- Deep Neural Networks for Encrypted Inference with TFHE [0.0]
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data without decryption.
TFHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information.
We show how to construct Deep Neural Networks (DNNs) that are compatible with the constraints of TFHE, an FHE scheme that allows arbitrary depth computation circuits.
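As a tangible illustration of "computation on encrypted data without decryption", the sketch below uses the third-party `phe` (python-paillier) package, which is an assumption of this note rather than anything the paper uses. Paillier is only additively homomorphic, whereas TFHE supports arbitrary-depth circuits, but the core property is the same: the server manipulates ciphertexts it cannot read.

```python
from phe import paillier  # pip install phe; an illustrative stand-in, not TFHE

# Client side: generate keys and encrypt sensitive inputs (e.g., financial data).
public_key, private_key = paillier.generate_paillier_keypair()
enc_income = public_key.encrypt(52_000.0)
enc_debt = public_key.encrypt(17_500.0)

# Server side: computes a linear feature on ciphertexts it cannot decrypt.
# Additive homomorphism permits ciphertext + ciphertext and ciphertext * scalar.
enc_feature = enc_income + enc_debt * (-1.5)

# Client side: only the private-key holder can recover the result.
print(private_key.decrypt(enc_feature))  # 52000 - 1.5 * 17500 = 25750.0
```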
arXiv Detail & Related papers (2023-02-13T09:53:31Z)
- Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution for defending networks against adversarial attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained between robustness and standard accuracy.
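The summary does not spell out the selection criterion, so the sketch below shows only the generic PyTorch adversarial-training step such work builds on: craft an FGSM perturbation per mini-batch, then train on a selected subset of the perturbed examples. The model, `epsilon`, and the keep-the-hardest-examples heuristic are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=0.03, keep_frac=0.5):
    # 1) Craft FGSM perturbations for the current mini-batch.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 2) Toy "informed selection": keep only the hardest (highest per-sample
    #    loss) adversarial examples, a stand-in for the paper's strategy.
    with torch.no_grad():
        per_sample = F.cross_entropy(model(x_adv), y, reduction="none")
        k = max(1, int(keep_frac * len(y)))
        idx = per_sample.topk(k).indices

    # 3) Update the model on the selected adversarial examples only.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv[idx]), y[idx]).backward()
    optimizer.step()
```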
arXiv Detail & Related papers (2023-01-07T12:09:50Z)
- RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference by rethinking the distribution strategy.
We formulate this methodology as an optimization problem that establishes a trade-off between the latency of co-inference and the privacy level of the data.
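Read as pseudocode, that trade-off amounts to picking a partition point in the network that minimizes a weighted objective. Everything below (the per-layer latency numbers, the leakage scores, the weight `lam`, the brute-force search) is hypothetical; the paper formulates this formally and solves it with reinforcement learning rather than enumeration.

```python
# Hypothetical costs for splitting the network after layer i:
latency_ms = [12, 18, 25, 32, 40]            # deeper split = more on-device compute
leakage    = [0.90, 0.60, 0.35, 0.20, 0.10]  # deeper split = less revealing features

def best_split(lam=50.0):
    """Brute-force the latency + lam * leakage trade-off over split points."""
    scores = [t + lam * p for t, p in zip(latency_ms, leakage)]
    return min(range(len(scores)), key=scores.__getitem__)

print(best_split())         # privacy weighted heavily: a deeper split wins
print(best_split(lam=5.0))  # latency dominates: an earlier split wins
```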
arXiv Detail & Related papers (2022-08-27T14:50:00Z)
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
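For readers unfamiliar with what "fusing variational Bayes into an architecture" means in code, here is a minimal Bayesian linear layer using the reparameterization trick: the weights are a learned Gaussian distribution, and each forward pass samples them, which is the randomness credited with robustness. This is a generic sketch in PyTorch, not the paper's BNN-DenseNet model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer whose weights are sampled from a learned Gaussian posterior."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # softplus(rho) = sigma
        self.bias = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        sigma = F.softplus(self.rho)
        # Reparameterization: w = mu + sigma * eps keeps sampling differentiable,
        # so mu and rho train with ordinary backprop (variational Bayes).
        w = self.mu + sigma * torch.randn_like(sigma)
        return F.linear(x, w, self.bias)

layer = BayesianLinear(16, 4)
x = torch.randn(2, 16)
print(layer(x))  # two forward passes give different outputs:
print(layer(x))  # the stochasticity an attacker must fight through
```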
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- NN-EMD: Efficiently Training Neural Networks using Encrypted Multi-Sourced Datasets [7.067870969078555]
Training a machine learning model over an encrypted dataset is a promising approach to the privacy-preserving machine learning task.
We propose a novel framework, NN-EMD, to train a deep neural network (DNN) model over multiple datasets collected from multiple sources.
We evaluate the performance of our framework in terms of training time and model accuracy on the MNIST dataset.
arXiv Detail & Related papers (2020-12-18T23:01:20Z)
- Towards Scalable and Privacy-Preserving Deep Neural Network via Algorithmic-Cryptographic Co-design [28.789702559193675]
We propose SPNN - a Scalable and Privacy-preserving deep Neural Network learning framework.
From a cryptographic perspective, we propose using two types of cryptographic techniques, i.e., secret sharing and homomorphic encryption.
Experimental results conducted on real-world datasets demonstrate the superiority of SPNN.
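Of the two cryptographic ingredients SPNN names, additive secret sharing is simple enough to sketch end to end. Below, a private vector is split into random shares so that any single share is uniformly random noise, yet the shares sum back to the secret; the modulus and helper names are illustrative choices, not SPNN's implementation.

```python
import numpy as np

Q = 2**31 - 1  # a public modulus; all arithmetic happens mod Q
rng = np.random.default_rng(7)

def share(secret, n_parties=2):
    """Split an integer vector into n additive shares that sum to it mod Q."""
    shares = [rng.integers(0, Q, size=secret.shape) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

x = np.array([123, 456, 789])
s1, s2 = share(x)
# Each share alone reveals nothing, yet sums of shares reconstruct sums of
# secrets, which is what lets parties add private values without seeing them.
assert np.array_equal(reconstruct([s1, s2]), x)
print(s1, s2, reconstruct([s1, s2]))
```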
arXiv Detail & Related papers (2020-12-17T02:26:16Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
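A concrete instance of the "symmetric geometrical transformations" the survey refers to is the translation equivariance built into convolutions: shifting the input shifts the feature map the same way. The short PyTorch check below verifies this numerically for a circular shift; the layer, seed, and shapes are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode="circular")
x = torch.randn(1, 1, 16, 16)

shift = lambda t: torch.roll(t, shifts=2, dims=-1)  # translate 2 pixels right

# Equivariance: conv(shift(x)) == shift(conv(x)), up to float tolerance.
print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-6))  # True
```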
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
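Since SPNs are less familiar than DNNs, the following toy shows what CryptoSPN must evaluate under encryption: a network of product nodes (factorizations) and sum nodes (weighted mixtures) over leaf distributions. The structure and numbers here are made up purely for illustration.

```python
import math

def leaf(mean, x):
    """Log-likelihood of x under a unit-variance Gaussian leaf."""
    return -0.5 * ((x - mean) ** 2 + math.log(2 * math.pi))

def spn_log_density(x1, x2):
    # Product nodes: independence within a component, so log-likelihoods add.
    comp1 = leaf(0.0, x1) + leaf(0.0, x2)
    comp2 = leaf(3.0, x1) + leaf(3.0, x2)
    # Sum node: a 0.7/0.3 mixture of the components (log-sum-exp for stability).
    m = max(comp1, comp2)
    return m + math.log(0.7 * math.exp(comp1 - m) + 0.3 * math.exp(comp2 - m))

print(spn_log_density(0.1, -0.2))  # near component 1: higher log-density
print(spn_log_density(5.0, 5.0))   # far from both components: low log-density
```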
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.