HE-MAN -- Homomorphically Encrypted MAchine learning with oNnx models
- URL: http://arxiv.org/abs/2302.08260v1
- Date: Thu, 16 Feb 2023 12:37:14 GMT
- Title: HE-MAN -- Homomorphically Encrypted MAchine learning with oNnx models
- Authors: Martin Nocker, David Drexel, Michael Rader, Alessio Montuoro, Pascal Schöttle
- Abstract summary: Fully homomorphic encryption (FHE) is a promising technique to enable individuals to use ML services without giving up privacy.
We introduce HE-MAN, an open-source machine learning toolset for privacy-preserving inference with ONNX models and homomorphically encrypted data.
Compared to prior work, HE-MAN supports a broad range of ML models in ONNX format out of the box without sacrificing accuracy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) algorithms are increasingly important for the success
of products and services, especially considering the growing amount and
availability of data. This also holds for areas handling sensitive data, e.g.,
applications processing medical data or facial images. However, people are
reluctant to pass their sensitive personal data to an ML service provider. At
the same time, service providers have a strong interest in protecting their
intellectual property and therefore refrain from publicly sharing their ML
models. Fully homomorphic encryption (FHE) is a promising technique to enable
individuals to use ML services without giving up privacy while simultaneously
protecting the service provider's ML model. Despite steady improvements, FHE
is still rarely integrated into today's ML applications.
We introduce HE-MAN, an open-source two-party machine learning toolset for
privacy-preserving inference with ONNX models and homomorphically encrypted
data. Neither the model nor the input data has to be disclosed. HE-MAN
abstracts cryptographic details away from its users, so expertise in FHE is
not required for either party. HE-MAN's security relies on its underlying FHE
schemes; for now, we integrate two FHE libraries, Concrete and TenSEAL.
Compared to prior work, HE-MAN supports a broad range of ML models in ONNX
format out of the box without sacrificing accuracy. We evaluate the
performance of our implementation on different network architectures for
handwritten-digit classification and face recognition, and report the accuracy
and latency of homomorphically encrypted inference. Cryptographic parameters
are derived automatically by the tools. We show that the accuracy of HE-MAN is
on par with models operating on plaintext input, while inference latency is
several orders of magnitude higher than in the plaintext case.
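
Because HE-MAN consumes models in ONNX format, a natural first step is to inspect which operators a model actually uses: FHE schemes natively evaluate only additions and multiplications, so comparison-based operators (e.g. ReLU, MaxPool) need scheme-specific support or a polynomial substitute. The following minimal sketch uses the onnx Python package for such an inspection; the file name is a placeholder and the snippet is illustrative, not taken from the HE-MAN codebase.

    import onnx

    # Load a trained model that was exported to ONNX (placeholder path).
    model = onnx.load("mnist_cnn.onnx")

    # List the distinct operator types in the graph. Linear operators
    # (Gemm, Conv, MatMul, Add) map directly onto FHE additions and
    # multiplications; comparison-based operators (Relu, MaxPool) require
    # programmable bootstrapping (TFHE / Concrete) or a polynomial
    # approximation (CKKS / TenSEAL).
    op_types = sorted({node.op_type for node in model.graph.node})
    print(op_types)  # e.g. ['Conv', 'Gemm', 'MaxPool', 'Relu', 'Reshape']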
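
The two-party setting described in the abstract can be made concrete with TenSEAL, one of the two libraries HE-MAN integrates. Below is a minimal sketch of the flow that HE-MAN automates: the client generates keys and encrypts its input, the server evaluates a model on ciphertexts without ever seeing plaintext, and only the client can decrypt the result. The CKKS parameters, toy weights, and degree-3 sigmoid coefficients are illustrative assumptions, not HE-MAN's API; HE-MAN derives suitable cryptographic parameters automatically.

    import tenseal as ts

    # --- Client: key generation and input encryption -------------------
    # Illustrative CKKS parameters sized for the multiplicative depth of
    # the toy model below; HE-MAN derives such parameters automatically.
    ctx = ts.context(
        ts.SCHEME_TYPE.CKKS,
        poly_modulus_degree=16384,
        coeff_mod_bit_sizes=[60, 40, 40, 40, 40, 60],
    )
    ctx.global_scale = 2 ** 40
    ctx.generate_galois_keys()  # rotations, needed for vector-matrix products

    x = [0.5, -1.2, 3.3, 0.7]      # e.g. flattened input features
    enc_x = ts.ckks_vector(ctx, x)

    # Only the public material and the ciphertext travel to the server.
    public_ctx_bytes = ctx.serialize(save_secret_key=False)
    enc_x_bytes = enc_x.serialize()

    # --- Server: inference on ciphertexts; the model stays private -----
    srv_ctx = ts.context_from(public_ctx_bytes)
    srv_x = ts.ckks_vector_from(srv_ctx, enc_x_bytes)

    W = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.4], [-0.2, 0.1]]  # toy 4x2 layer
    b = [0.05, -0.03]
    logits = srv_x.mm(W) + b       # encrypted vector-matrix product
    # Non-polynomial activations are replaced by polynomial approximations,
    # here a common degree-3 approximation of the sigmoid.
    out = logits.polyval([0.5, 0.197, 0.0, -0.004])

    # --- Client: only the holder of the secret key can decrypt ---------
    enc_y = ts.ckks_vector_from(ctx, out.serialize())
    print(enc_y.decrypt())         # approximate scores

Note the design constraint this sketch exposes: every non-linear layer costs multiplicative depth, which is why the choice of CKKS parameters (and hence latency) depends on the model architecture.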
Related papers
- KnowledgeSG: Privacy-Preserving Synthetic Text Generation with Knowledge Distillation from Server (2024-10-08)
  Large language models (LLMs) make it possible for many parties to fine-tune LLMs on their own private data. Existing solutions, such as substituting synthetic data, struggle to simultaneously improve performance and preserve privacy. We propose KnowledgeSG, a novel client-server framework that enhances synthetic data quality and improves model performance while ensuring privacy.
- GuardML: Efficient Privacy-Preserving Machine Learning Services Through Hybrid Homomorphic Encryption (2024-01-26)
  Privacy-Preserving Machine Learning (PPML) methods have been introduced to safeguard the privacy and security of machine learning models. Hybrid Homomorphic Encryption (HHE), a modern cryptographic scheme, has recently emerged. We develop and evaluate an HHE-based PPML application for classifying heart disease from sensitive ECG data.
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners (2023-10-03)
  We introduce Contextual Privacy Protection Language Models (PrivacyMind). Our work offers a theoretical analysis for model design and benchmarks various techniques; in particular, instruction tuning with both positive and negative examples stands out as a promising method.
- SABLE: Secure And Byzantine robust LEarning (2023-09-11)
  Homomorphic encryption (HE) has emerged as a leading security measure to preserve privacy in distributed learning. This paper introduces SABLE, the first homomorphic and Byzantine-robust distributed learning algorithm.
- Robust Representation Learning for Privacy-Preserving Machine Learning: A Multi-Objective Autoencoder Approach (2023-09-08)
  We propose a robust representation learning framework for privacy-preserving machine learning (ppML). Our method centers on training autoencoders in a multi-objective manner and concatenating the latent and learned features from the encoder as the encoded form of the data. With this framework, data can be shared and third-party tools used without the threat of revealing its original form.
- PEOPL: Characterizing Privately Encoded Open Datasets with Public Labels (2023-03-31)
  We introduce information-theoretic scores for privacy and utility, which quantify the average performance of an unfaithful user. We then theoretically characterize primitives for building families of encoding schemes, motivating the use of random deep neural networks.
- Effect of Homomorphic Encryption on the Performance of Training Federated Learning Generative Adversarial Networks (2022-07-01)
  A Generative Adversarial Network (GAN) is a deep-learning generative model. In certain fields, such as medicine, the training data may be hospital patient records stored across different hospitals. This paper focuses on the performance loss of training an FL-GAN with three different types of homomorphic encryption.
- THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption (2022-06-01)
  Privacy-preserving inference of transformer models is in demand from cloud service users. We introduce THE-X, an approximation approach for transformers that enables privacy-preserving inference of pre-trained models.
- Privacy-Preserving Wavelet Neural Network with Fully Homomorphic Encryption (2022-05-26)
  Privacy-Preserving Machine Learning (PPML) aims to protect privacy and provide security for the data used in building machine learning models. We propose a fully homomorphically encrypted wavelet neural network that protects privacy without compromising the efficiency of the model.
- Just Fine-tune Twice: Selective Differential Privacy for Large Language Models (2022-04-15)
  We propose a simple yet effective just-fine-tune-twice privacy mechanism to achieve selective differential privacy (SDP) for large Transformer-based language models. Experiments show that our models achieve strong performance while staying robust to the canary insertion attack.
- CryptoSPN: Privacy-preserving Sum-Product Network Inference (2020-02-03)
  We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs). CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.