Data Privacy with Homomorphic Encryption in Neural Networks Training and
Inference
- URL: http://arxiv.org/abs/2305.02225v1
- Date: Wed, 3 May 2023 16:05:26 GMT
- Title: Data Privacy with Homomorphic Encryption in Neural Networks Training and
Inference
- Authors: Ivone Amorim, Eva Maia, Pedro Barbosa, Isabel Praça
- Abstract summary: Homomorphic Encryption (HE) has the potential to preserve data privacy in Neural Networks (NNs).
This study focuses on the techniques and strategies used to enhance data privacy and security.
HE has the potential to provide strong data privacy guarantees for NNs, but several challenges need to be addressed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of Neural Networks (NNs) for sensitive data processing is becoming
increasingly popular, raising concerns about data privacy and security.
Homomorphic Encryption (HE) has the potential to be used as a solution to
preserve data privacy in NNs. This study provides a comprehensive analysis of
the use of HE for NN training and classification, focusing on the techniques
and strategies used to enhance data privacy and security. The current
state-of-the-art in HE for NNs is analysed, and the challenges and limitations
that need to be addressed to make it a reliable and efficient approach for
privacy preservation are identified. Also, the different categories of HE
schemes and their suitability for NNs are discussed, as well as the techniques
used to optimize the accuracy and efficiency of encrypted models. The review
reveals that HE has the potential to provide strong data privacy guarantees for
NNs, but several challenges need to be addressed, such as limited support for
advanced NN operations, scalability issues, and performance trade-offs.
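To make the setting concrete, the following is a minimal sketch of encrypted inference for a single neuron under the CKKS scheme. It assumes the TenSEAL library, which is not mentioned in the paper; the parameters, inputs, and weights are purely illustrative, and the squared activation stands in for the polynomial approximations that encrypted models typically use, since HE cannot evaluate non-polynomial functions such as ReLU directly.

```python
import tenseal as ts  # assumed CKKS-capable HE library; not referenced in the paper

# CKKS context: leveled HE over approximate (fixed-point) arithmetic.
# Parameters are illustrative, not tuned for a specific security level or depth.
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

x = [0.5, -1.2, 3.0, 0.1]          # client-side plaintext features (hypothetical)
enc_x = ts.ckks_vector(ctx, x)     # ciphertext sent to the untrusted server

w = [0.25, 0.5, -0.75, 1.0]        # server-side plaintext weights (hypothetical)
bias = [0.3]

enc_z = enc_x.dot(w) + bias        # affine transform computed on the ciphertext
enc_a = enc_z * enc_z              # squared activation: polynomial stand-in for ReLU

print(enc_a.decrypt())             # only the secret-key holder (the client) can decrypt
```

The sketch mirrors the client-server setting the survey discusses: the server computes on ciphertexts without ever seeing plaintext inputs, and the multiplicative depth of the circuit (one ciphertext-ciphertext multiplication here) drives both the parameter choices and the runtime cost.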
Related papers
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Scalable and Efficient Methods for Uncertainty Estimation and Reduction in Deep Learning [0.0]
This paper explores scalable and efficient methods for uncertainty estimation and reduction in deep learning.
We tackle the inherent uncertainties arising from out-of-distribution inputs and hardware non-idealities.
Our approach encompasses problem-aware training algorithms, novel NN topologies, and hardware co-design solutions.
arXiv Detail & Related papers (2024-01-13T19:30:34Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications [76.88662943995641]
Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data.
However, their use on sensitive graph data raises privacy concerns, and researchers have started to develop privacy-preserving GNNs.
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
arXiv Detail & Related papers (2023-08-31T00:31:08Z)
- Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version) [12.272381003294026]
Existing frameworks provide robustness and/or safety guarantees for the trained NNs.
We proposed FANNet, the first model checking-based framework for analyzing a broader range of NN properties.
This work develops state-space reduction and input segmentation approaches to improve the scalability and timing efficiency of formal NN analysis.
arXiv Detail & Related papers (2023-06-29T22:18:07Z)
- Heterogeneous Randomized Response for Differential Privacy in Graph Neural Networks [18.4005860362025]
Graph neural networks (GNNs) are susceptible to privacy inference attacks (PIAs).
We propose a novel mechanism to protect nodes' features and edges against PIAs under differential privacy (DP) guarantees.
We derive significantly better randomization probabilities and tighter error bounds at both levels of nodes' features and edges (a generic randomized-response sketch appears after this list).
arXiv Detail & Related papers (2022-11-10T18:52:46Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Privacy-Preserving Decentralized Inference with Graph Neural Networks in Wireless Networks [39.99126905067949]
We analyze and enhance the privacy of decentralized inference with graph neural networks in wireless networks.
Specifically, we adopt local differential privacy as the metric, and design novel privacy-preserving signals.
We also adopt the over-the-air technique and theoretically demonstrate its advantage in privacy preservation.
arXiv Detail & Related papers (2022-08-15T01:33:07Z)
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends [115.84291569988748]
Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios.
Performance-oriented GNNs have exhibited potential adverse effects like vulnerability to adversarial attacks.
To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
arXiv Detail & Related papers (2022-05-16T02:21:09Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs (a plaintext SPN evaluation sketch appears after this list).
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
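As a companion to the Heterogeneous Randomized Response entry above, the following is a minimal sketch of classic binary randomized response under epsilon-local differential privacy. It is the generic textbook mechanism, not the heterogeneous mechanism proposed in that paper, and the feature and edge semantics are assumed purely for illustration.

```python
import math
import random

def randomized_response(bit: bool, epsilon: float) -> bool:
    """Report the true bit with probability e^eps / (e^eps + 1),
    which satisfies epsilon-local differential privacy."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else not bit

# Hypothetical use: perturb a node's binary feature and an edge indicator
# before sharing them with an untrusted aggregator.
noisy_feature = randomized_response(True, epsilon=1.0)
noisy_edge = randomized_response(False, epsilon=1.0)
print(noisy_feature, noisy_edge)
```

Reporting the truth with probability e^eps / (e^eps + 1) makes the likelihood ratio between the two possible reports exactly e^eps, which is the tight setting for epsilon-local DP.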
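For the CryptoSPN entry above, the sketch below evaluates a tiny sum-product network in plaintext, only to make the data structure concrete; the structure, weights, and leaf probabilities are invented, and none of CryptoSPN's cryptographic machinery is shown.

```python
# Tiny plaintext sum-product network (SPN), evaluated bottom-up.
# Leaves hold probabilities of the observed values; product nodes factorize
# independent scopes; sum nodes form weighted mixtures over their children.

def leaf(p):
    return p

def product(*children):
    out = 1.0
    for c in children:
        out *= c
    return out

def weighted_sum(weights, children):
    return sum(w * c for w, c in zip(weights, children))

# P(X1, X2) for one observation, with hypothetical leaf probabilities.
x1_a, x1_b = leaf(0.9), leaf(0.2)   # two leaves over X1
x2_a, x2_b = leaf(0.3), leaf(0.8)   # two leaves over X2
root = weighted_sum([0.6, 0.4],
                    [product(x1_a, x2_a), product(x1_b, x2_b)])
print(root)   # 0.6*0.9*0.3 + 0.4*0.2*0.8 = 0.226
```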