Efficient CNN Building Blocks for Encrypted Data
- URL: http://arxiv.org/abs/2102.00319v1
- Date: Sat, 30 Jan 2021 21:47:23 GMT
- Title: Efficient CNN Building Blocks for Encrypted Data
- Authors: Nayna Jain, Karthik Nandakumar, Nalini Ratha, Sharath Pankanti, Uttam
Kumar
- Abstract summary: Fully Homomorphic Encryption (FHE) is a promising technique to enable machine learning and inference on encrypted data.
We show that operational parameters of the chosen FHE scheme have a major impact on the design of the machine learning model.
Our empirical study shows that the choice of the aforementioned design parameters results in significant trade-offs between accuracy, security level, and computational time.
- Score: 6.955451042536852
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning on encrypted data can address the concerns related to
privacy and legality of sharing sensitive data with untrustworthy service
providers. Fully Homomorphic Encryption (FHE) is a promising technique to
enable machine learning and inferencing while providing strict guarantees
against information leakage. Since deep convolutional neural networks (CNNs)
have become the machine learning tool of choice in several applications,
several attempts have been made to harness CNNs to extract insights from
encrypted data. However, existing works focus only on ensuring data security
and ignore the security of model parameters. They also report high-level
implementations without providing rigorous analysis of the accuracy, security,
and speed trade-offs involved in the FHE implementation of generic primitive
operators of a CNN such as convolution, non-linear activation, and pooling. In
this work, we consider a Machine Learning as a Service (MLaaS) scenario where
both input data and model parameters are secured using FHE. Using the CKKS
scheme available in the open-source HElib library, we show that operational
parameters of the chosen FHE scheme such as the degree of the cyclotomic
polynomial, depth limitations of the underlying leveled HE scheme, and the
computational precision parameters have a major impact on the design of the
machine learning model (especially, the choice of the activation function and
pooling method). Our empirical study shows that the choice of the aforementioned
design parameters results in significant trade-offs between accuracy, security level,
and computational time. Encrypted inference experiments on the MNIST dataset
indicate that other design choices such as ciphertext packing strategy and
parallelization using multithreading are also critical in determining the
throughput and latency of the inference process.
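To make these parameter trade-offs concrete, below is a minimal sketch (not the authors' code) of how a CKKS context is configured in HElib and how an HE-friendly activation is evaluated on packed data. The specific values of m, bits, and precision are illustrative assumptions modeled on HElib's CKKS tutorial, not the configuration studied in the paper.

```cpp
// Minimal sketch, assuming HElib's public CKKS API; parameter values are
// illustrative, not the configuration evaluated in the paper.
#include <helib/helib.h>
#include <vector>

int main() {
  // m indexes the cyclotomic polynomial: larger m raises the security level
  // and the number of SIMD slots, but slows every homomorphic operation.
  // 'bits' bounds the ciphertext modulus and hence the multiplicative depth
  // available to the leveled scheme; 'precision' sets computational precision.
  helib::Context context = helib::ContextBuilder<helib::CKKS>()
                               .m(16 * 1024)   // cyclotomic index (assumed)
                               .bits(119)      // modulus bits -> depth budget
                               .precision(20)  // bits of precision
                               .c(2)           // key-switching columns
                               .build();

  helib::SecKey secretKey(context);
  secretKey.GenSecKey();
  const helib::PubKey& publicKey = secretKey;

  // Ciphertext packing: a whole vector of activations occupies the SIMD
  // slots of a single ciphertext, which drives inference throughput.
  std::vector<double> x(context.getNSlots(), 0.5);
  helib::PtxtArray ptxt(context, x);
  helib::Ctxt ctxt(publicKey);
  ptxt.encrypt(ctxt);

  // ReLU and max pooling are not natively computable under CKKS, so a
  // low-degree polynomial such as the square activation f(x) = x^2 is a
  // common substitute; it consumes a single multiplicative level.
  ctxt.square();

  helib::PtxtArray out(context);
  out.decrypt(ctxt, secretKey);
  std::vector<double> result;
  out.store(result);  // result[i] ~= x[i]^2 up to CKKS approximation error
  return 0;
}
```

Average pooling fits the same constraint, since it is a linear combination of slots, whereas max pooling would require comparisons that a leveled CKKS scheme cannot express cheaply.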
Related papers
- Empowering HWNs with Efficient Data Labeling: A Clustered Federated
Semi-Supervised Learning Approach [2.046985601687158]
Clustered Federated Multitask Learning (CFL) has gained considerable attention as an effective strategy for overcoming statistical challenges.
We introduce a novel framework, Clustered Federated Semi-Supervised Learning (CFSL), designed for more realistic HWN scenarios.
Our results demonstrate that CFSL significantly improves upon key metrics such as testing accuracy, labeling accuracy, and labeling latency under varying proportions of labeled and unlabeled data.
arXiv Detail & Related papers (2024-01-19T11:47:49Z) - Robust Representation Learning for Privacy-Preserving Machine Learning:
A Multi-Objective Autoencoder Approach [0.9831489366502302]
We propose a robust representation learning framework for privacy-preserving machine learning (ppML).
Our method centers on training autoencoders in a multi-objective manner and then concatenating the latent and learned features from the encoding part as the encoded form of our data.
With our proposed framework, we can share our data and use third party tools without being under the threat of revealing its original form.
arXiv Detail & Related papers (2023-09-08T16:41:25Z) - PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels [59.66777287810985]
We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user.
We then theoretically characterize primitives in building families of encoding schemes that motivate the use of random deep neural networks.
arXiv Detail & Related papers (2023-03-31T18:03:53Z) - Deep Neural Networks for Encrypted Inference with TFHE [0.0]
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data without decryption.
TFHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information.
We show how to construct Deep Neural Networks (DNNs) that are compatible with the constraints of TFHE, an FHE scheme that allows arbitrary depth computation circuits.
arXiv Detail & Related papers (2023-02-13T09:53:31Z) - SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Basic cross-platform tensor frameworks and script language engines alone do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all these requirements while still building on those basic components.
arXiv Detail & Related papers (2021-12-22T14:45:37Z) - Secure Neuroimaging Analysis using Federated Learning with Homomorphic
Encryption [14.269757725951882]
Federated learning (FL) enables distributed computation of machine learning models over disparate, remote data sources.
Recent membership attacks show that private or sensitive personal data can sometimes be leaked or inferred when model parameters or summary statistics are shared with a central site.
We propose a framework for secure FL using fully homomorphic encryption (FHE).
arXiv Detail & Related papers (2021-08-07T12:15:52Z) - Federated Learning with Unreliable Clients: Performance Analysis and
Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z) - On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z) - A Privacy-Preserving Distributed Architecture for
Deep-Learning-as-a-Service [68.84245063902908]
This paper introduces a novel distributed architecture for deep-learning-as-a-service.
It is able to preserve the user's sensitive data while providing Cloud-based machine and deep learning services.
arXiv Detail & Related papers (2020-03-30T15:12:03Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)