SoK: Privacy-preserving Deep Learning with Homomorphic Encryption
- URL: http://arxiv.org/abs/2112.12855v1
- Date: Thu, 23 Dec 2021 22:03:27 GMT
- Title: SoK: Privacy-preserving Deep Learning with Homomorphic Encryption
- Authors: Robert Podschwadt, Daniel Takabi, Peizhao Hu
- Abstract summary: With homomorphic encryption (HE), computation can be performed on encrypted data without revealing its content.
We take an in-depth look at approaches that combine neural networks with HE for privacy preservation.
We find numerous challenges to HE based privacy-preserving deep learning such as computational overhead, usability, and limitations posed by the encryption schemes.
- Score: 2.9069679115858755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Outsourced computation for neural networks allows users access to state-of-the-art models without needing to invest in specialized hardware and know-how. The problem is that the users lose control over potentially privacy-sensitive data. With homomorphic encryption (HE), computation can be performed on encrypted data without revealing its content. In this systematization of knowledge, we take an in-depth look at approaches that combine neural networks with HE for privacy preservation. We categorize the changes made to neural network models and architectures to make them computable over HE, and examine how these changes impact performance. We find numerous challenges to HE-based privacy-preserving deep learning, such as computational overhead, usability, and limitations posed by the encryption schemes.
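To make the surveyed approach concrete, here is a minimal sketch (illustrative only, not code from the paper) of encrypted inference with the open-source TenSEAL library: a single neuron is evaluated on a CKKS ciphertext, with the non-polynomial activation replaced by a low-degree polynomial, one of the typical model changes this SoK categorizes.

```python
# Minimal sketch: one HE-friendly neuron evaluated under CKKS with TenSEAL.
# The sigmoid is replaced by a low-degree polynomial, a common model change
# needed to make networks computable over HE.
import tenseal as ts

# Client: create a CKKS context (keys live here) and encrypt the input.
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=16384,
                 coeff_mod_bit_sizes=[60, 40, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()
enc_x = ts.ckks_vector(ctx, [0.5, -1.2, 3.0, 0.7])

# Server: evaluate w.x + b and a degree-2 polynomial activation,
# all directly on the ciphertext.
w, b = [0.1, 0.2, -0.3, 0.4], 0.05
enc_z = enc_x.dot(w) + b                     # encrypted pre-activation
enc_a = enc_z.polyval([0.5, 0.197, -0.004])  # polynomial stand-in for sigmoid

# Client: only the secret-key holder can read the result.
print(enc_a.decrypt())
```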
Related papers
- Federated Learning with Quantum Computing and Fully Homomorphic Encryption: A Novel Computing Paradigm Shift in Privacy-Preserving ML [4.92218040320554]
Federated Learning is a privacy-preserving alternative to conventional methods; it allows multiple learning clients to share model knowledge without disclosing private data.
This work applies the Fully Homomorphic Encryption scheme to a Federated Learning Neural Network architecture that integrates both classical and quantum layers.
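As a generic illustration of how FHE can protect this exchange (a sketch under assumed TenSEAL/CKKS usage, not the paper's protocol, and ignoring the quantum layers), a server can average clients' weight updates directly on ciphertexts:

```python
# Sketch: HE-protected federated averaging. Clients encrypt their weight
# updates; the server aggregates ciphertexts without seeing any plaintext.
import tenseal as ts

ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40

# Each client encrypts its local update before upload.
client_updates = [[0.10, -0.20, 0.30],
                  [0.05, -0.10, 0.25],
                  [0.15, -0.30, 0.20]]
enc_updates = [ts.ckks_vector(ctx, u) for u in client_updates]

# Server: homomorphic sum, then scale by 1/n (a plaintext multiplication).
enc_sum = enc_updates[0]
for enc_u in enc_updates[1:]:
    enc_sum = enc_sum + enc_u
enc_avg = enc_sum * (1.0 / len(enc_updates))

print(enc_avg.decrypt())  # only key holders recover the averaged update
```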
arXiv Detail & Related papers (2024-09-14T01:23:26Z)
- Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z)
- Learning in the Dark: Privacy-Preserving Machine Learning using Function Approximation [1.8907108368038215]
Learning in the Dark is a privacy-preserving machine learning model that can classify encrypted images with high accuracy.
It achieves accurate predictions by performing computations directly on encrypted data.
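The function-approximation ingredient can be sketched independently of the paper's exact construction: non-polynomial activations such as the sigmoid are replaced by low-degree polynomial fits, which HE schemes can evaluate. A minimal numpy example:

```python
# Sketch: fit a low-degree polynomial to sigmoid so that the activation
# becomes expressible with only additions and multiplications (HE-friendly).
import numpy as np

xs = np.linspace(-5, 5, 1000)
sigmoid = 1.0 / (1.0 + np.exp(-xs))

# Least-squares degree-3 polynomial approximation on [-5, 5].
coeffs = np.polyfit(xs, sigmoid, deg=3)
approx = np.polyval(coeffs, xs)

print("max abs error on [-5, 5]:", np.max(np.abs(approx - sigmoid)))
```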
arXiv Detail & Related papers (2023-09-15T06:45:58Z)
- Deep Neural Networks for Encrypted Inference with TFHE [0.0]
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data without decryption.
TFHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information.
We show how to construct Deep Neural Networks (DNNs) that are compatible with the constraints of TFHE, an FHE scheme that allows computation circuits of arbitrary depth.
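One central constraint can be sketched as follows (a hedged illustration, not the paper's pipeline; `quantize` is a hypothetical helper, not a TFHE API): TFHE computes over low-precision integers, so a TFHE-compatible DNN is typically quantized, and only integer arithmetic runs under encryption.

```python
# Sketch: uniform symmetric quantization of a float layer to 4-bit signed
# integers, plus the integer-only matrix product that TFHE-style schemes
# would evaluate under encryption.
import numpy as np

def quantize(t, bits=4):
    """Map a float tensor to signed integers with a shared scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(t)) / qmax
    return np.round(t / scale).astype(np.int64), scale

w = np.random.randn(8, 4)   # float weights
x = np.random.randn(4)      # float input
wq, ws = quantize(w)
xq, xs = quantize(x)

y_int = wq @ xq             # integer-only computation
print("quantized:", y_int * ws * xs)   # rescaled for comparison
print("float    :", w @ x)
```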
arXiv Detail & Related papers (2023-02-13T09:53:31Z)
- EDLaaS; Fully Homomorphic Encryption Over Neural Network Graphs [7.195443855063635]
We use the fourth-generation Cheon-Kim-Kim-Song (CKKS) FHE scheme over fixed-point numbers, as provided by the Microsoft Simple Encrypted Arithmetic Library (MS-SEAL).
We find that FHE is not a panacea for all privacy preserving machine learning (PPML) problems, and that certain limitations still remain, such as model training.
We focus on convolutional neural networks (CNNs), fashion-MNIST, and levelled FHE operations.
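To illustrate what levelled operation means (a generic sketch, not EDLaaS code): in levelled CKKS every ciphertext multiplication consumes one level of a modulus chain fixed at setup, so a network's multiplicative depth must be budgeted in advance. Using TenSEAL:

```python
# Sketch: each multiplication consumes one level of the modulus chain.
# The three middle 40-bit primes below give a small, fixed depth budget.
import tenseal as ts

ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=16384,
                 coeff_mod_bit_sizes=[60, 40, 40, 40, 60])
ctx.global_scale = 2 ** 40

v = ts.ckks_vector(ctx, [1.1])
v = v * v            # one level consumed: 1.21
v = v * v            # another level: 1.4641
print(v.decrypt())   # a deeper circuit would exhaust the chain and fail
```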
arXiv Detail & Related papers (2021-10-26T12:43:35Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training [64.54200987493573]
We propose NeuraCrypt, a private encoding scheme based on random deep neural networks.
NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data-owner.
We show that NeuraCrypt achieves competitive accuracy to non-private baselines on a variety of x-ray tasks.
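The underlying mechanism can be sketched as follows (a simplified stand-in: `neura_encode`, the layer sizes, and the seed are illustrative, not NeuraCrypt's actual architecture): raw data is pushed through a fixed, randomly initialized network whose weights only the data owner knows, and only the resulting encodings are released for training.

```python
# Sketch: encode data with a secret, never-trained random network and
# share only the encodings for public model training.
import numpy as np

rng = np.random.default_rng(seed=12345)   # the owner's secret seed

# Private random two-layer encoder (weights are never published).
W1 = rng.standard_normal((784, 512)) / np.sqrt(784)
W2 = rng.standard_normal((512, 256)) / np.sqrt(512)

def neura_encode(x):
    """Map raw features to an encoding via the secret random network."""
    h = np.maximum(0.0, x @ W1)   # ReLU hidden layer
    return h @ W2

x_raw = rng.random(784)           # e.g., a flattened x-ray image
z_public = neura_encode(x_raw)    # this encoding is what gets shared
print(z_public.shape)             # (256,)
```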
arXiv Detail & Related papers (2021-06-04T13:42:21Z)
- TenSEAL: A Library for Encrypted Tensor Operations Using Homomorphic Encryption [0.0]
We present TenSEAL, an open-source library for Privacy-Preserving Machine Learning using Homomorphic Encryption.
We show that an encrypted convolutional neural network can be evaluated in less than a second, using less than half a megabyte of communication.
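For flavor, a short usage sketch modeled on TenSEAL's public examples, here using the BFV scheme for exact integer arithmetic (the values are arbitrary):

```python
# Sketch: exact encrypted integer arithmetic with TenSEAL's BFV scheme.
import tenseal as ts

ctx = ts.context(ts.SCHEME_TYPE.BFV,
                 poly_modulus_degree=4096,
                 plain_modulus=1032193)

enc_a = ts.bfv_vector(ctx, [1, 2, 3, 4])
enc_b = ts.bfv_vector(ctx, [10, 20, 30, 40])

enc_c = enc_a + enc_b          # ciphertext-ciphertext addition
enc_d = enc_c * [2, 2, 2, 2]   # ciphertext-plaintext multiplication
print(enc_d.decrypt())         # [22, 44, 66, 88]
```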
arXiv Detail & Related papers (2021-04-07T14:32:38Z)
- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can contain 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders-of-magnitude faster than existing approaches (seconds versus ...).
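A generic noise-response probe can be sketched as follows (the paper's actual feature-generation technique is more involved; `noise_response_curve` and the toy model are illustrative): sweep the amplitude of input noise and record how strongly the outputs deviate, which yields a cheap fingerprint of the network's nonlinearity.

```python
# Sketch: measure mean output deviation as input-noise amplitude grows.
# `model` is any callable mapping an input array to an output array.
import numpy as np

def noise_response_curve(model, x, sigmas=(0.01, 0.05, 0.1, 0.5), n=64):
    """Mean output deviation of `model` at each noise amplitude."""
    y0 = model(x)
    curve = []
    for s in sigmas:
        noise = np.random.randn(n, *x.shape) * s
        ys = np.stack([model(x + e) for e in noise])
        curve.append(np.mean(np.linalg.norm(ys - y0, axis=-1)))
    return np.array(curve)

# Toy example: a one-layer tanh "network".
toy = lambda v: np.tanh(v @ np.ones((8, 3)))
print(noise_response_curve(toy, np.zeros(8)))
```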
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
- A Privacy-Preserving Distributed Architecture for Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It preserves users' sensitive data while providing cloud-based machine learning and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
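To make sum-product networks concrete, here is a tiny plaintext SPN evaluator (CryptoSPN itself evaluates such networks under secure computation; this sketch only shows the plaintext semantics): sum nodes form weighted mixtures of their children, and product nodes multiply them.

```python
# Sketch: recursive evaluation of a toy sum-product network (SPN).
def eval_spn(node, x):
    if node["type"] == "leaf":                 # e.g., a Bernoulli leaf
        return node["fn"](x)
    vals = [eval_spn(c, x) for c in node["children"]]
    if node["type"] == "product":
        out = 1.0
        for v in vals:
            out *= v
        return out
    # sum node: weighted mixture (weights sum to 1)
    return sum(w * v for w, v in zip(node["weights"], vals))

# A 2-variable SPN: P(x) = 0.6*P1(x0)*P2(x1) + 0.4*P3(x0)*P4(x1)
leaf = lambda i, p: {"type": "leaf", "fn": lambda x: p if x[i] else 1 - p}
spn = {"type": "sum", "weights": [0.6, 0.4], "children": [
    {"type": "product", "children": [leaf(0, 0.9), leaf(1, 0.2)]},
    {"type": "product", "children": [leaf(0, 0.3), leaf(1, 0.7)]},
]}
print(eval_spn(spn, [1, 0]))   # 0.6*0.9*0.8 + 0.4*0.3*0.3 = 0.468
```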
This list is automatically generated from the titles and abstracts of the papers on this site.