GuardML: Efficient Privacy-Preserving Machine Learning Services Through
Hybrid Homomorphic Encryption
- URL: http://arxiv.org/abs/2401.14840v1
- Date: Fri, 26 Jan 2024 13:12:52 GMT
- Title: GuardML: Efficient Privacy-Preserving Machine Learning Services Through
Hybrid Homomorphic Encryption
- Authors: Eugene Frimpong, Khoa Nguyen, Mindaugas Budzys, Tanveer Khan, Antonis
Michalas
- Abstract summary: Privacy-Preserving Machine Learning (PPML) methods have been introduced to safeguard the privacy and security of Machine Learning models.
A modern cryptographic scheme, Hybrid Homomorphic Encryption (HHE), has recently emerged.
We develop and evaluate an HHE-based PPML application for classifying heart disease based on sensitive ECG data.
- Score: 2.611778281107039
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning (ML) has emerged as one of data science's most
transformative and influential domains. However, the widespread adoption of ML
introduces privacy-related concerns owing to the increasing number of malicious
attacks targeting ML models. To address these concerns, Privacy-Preserving
Machine Learning (PPML) methods have been introduced to safeguard the privacy
and security of ML models. One such approach is the use of Homomorphic
Encryption (HE). However, the significant drawbacks and inefficiencies of
traditional HE render it impractical for highly scalable scenarios.
Fortunately, a modern cryptographic scheme, Hybrid Homomorphic Encryption
(HHE), has recently emerged, combining the strengths of symmetric cryptography
and HE to surmount these challenges. Our work seeks to introduce HHE to ML by
designing a PPML scheme tailored for end devices. We leverage HHE as the
fundamental building block to enable secure learning of classification outcomes
over encrypted data, all while preserving the privacy of the input data and ML
model. We demonstrate the real-world applicability of our construction by
developing and evaluating an HHE-based PPML application for classifying heart
disease based on sensitive ECG data. Notably, our evaluations revealed a slight
reduction in accuracy compared to inference on plaintext data. Additionally,
both the analyst and end devices experience minimal communication and
computation costs, underscoring the practical viability of our approach. The
successful integration of HHE into PPML provides a glimpse into a more secure
and privacy-conscious future for machine learning on relatively constrained end
devices.
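The abstract describes the HHE workflow only at a high level: the client protects its data with a lightweight symmetric cipher, only the symmetric key material is encrypted under HE, and the server "transciphers" the symmetric ciphertext into homomorphic ciphertexts before evaluating the classifier on them. The sketch below illustrates that flow under simplified assumptions that are not from the paper: a toy Paillier instance with tiny primes stands in for the HE scheme, an additive one-time-pad layer stands in for an HE-friendly cipher such as PASTA, and the classifier is a plain linear model. It is an illustration of the transciphering idea, not GuardML's actual construction, parameters, or security guarantees.

```python
# Illustrative-only sketch of HHE-style transciphering for encrypted inference.
# Assumptions (not from the paper): toy Paillier with tiny primes, an additive
# one-time pad as the symmetric layer, and a linear classifier. Not secure.
import math
import random

# --- toy Paillier (additively homomorphic) ----------------------------------
P, Q = 10007, 10009                      # toy primes; real keys are >2048 bits
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)
G = N + 1

def _L(x):                               # Paillier L-function
    return (x - 1) // N

MU = pow(_L(pow(G, LAM, N2)), -1, N)

def enc(m):
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(G, m % N, N2) * pow(r, N, N2)) % N2

def dec(c):
    return (_L(pow(c, LAM, N2)) * MU) % N

def add_plain(c, k):                     # E(m) -> E(m + k)
    return (c * pow(G, k % N, N2)) % N2

def mul_plain(c, k):                     # E(m) -> E(k * m)
    return pow(c, k % N, N2)

def to_signed(m):                        # decode a centered residue
    return m - N if m > N // 2 else m

# --- client (end device): cheap symmetric encryption of the features --------
features = [3, 7, 2, 5]                               # e.g. quantised ECG features
keystream = [random.randrange(N) for _ in features]
sym_ct = [(x + k) % N for x, k in zip(features, keystream)]   # symmetric layer only
enc_keystream = [enc(k) for k in keystream]                   # key material under HE

# --- server: transcipher, then evaluate a linear classifier under encryption
# E(x_i) = E(c_i - k_i): subtract the encrypted keystream from the public ciphertext.
enc_features = [add_plain(mul_plain(ek, -1), c) for ek, c in zip(enc_keystream, sym_ct)]
weights, bias = [2, -1, 4, 3], -10                    # model stays in plaintext server-side
score_ct = enc(bias)
for ef, w in zip(enc_features, weights):
    score_ct = (score_ct * mul_plain(ef, w)) % N2     # score += w * x, under encryption

# --- key holder: decrypt the score and classify ------------------------------
score = to_signed(dec(score_ct))
assert score == sum(w * x for w, x in zip(weights, features)) + bias
print("encrypted linear score:", score, "->", "positive" if score >= 0 else "negative")
```

In a real HHE deployment the server would homomorphically evaluate the symmetric cipher's decryption circuit from a single short key rather than receiving one HE ciphertext per keystream element; that expansion step is what keeps the end device's upload and computation small.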
Related papers
- CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration [90.36429361299807]
Multimodal large language models (MLLMs) have demonstrated remarkable success in engaging in conversations involving visual inputs.
The integration of visual modality has introduced a unique vulnerability: the MLLM becomes susceptible to malicious visual inputs.
We introduce a technique termed CoCA, which amplifies the safety-awareness of the MLLM by calibrating its output distribution.
arXiv Detail & Related papers (2024-09-17T17:14:41Z)
- A Pervasive, Efficient and Private Future: Realizing Privacy-Preserving Machine Learning Through Hybrid Homomorphic Encryption [2.434439232485276]
Privacy-Preserving Machine Learning (PPML) methods have been proposed to mitigate the privacy and security risks of ML models.
A modern encryption scheme that combines symmetric cryptography with HE has been introduced to overcome these challenges.
This work introduces HHE to the ML field by proposing resource-friendly PPML protocols for edge devices.
arXiv Detail & Related papers (2024-09-10T11:04:14Z)
- A Quantization-based Technique for Privacy Preserving Distributed Learning [2.2139875218234475]
We describe a novel, regulation-compliant data protection technique for the distributed training of Machine Learning models.
Our method protects both training data and ML model parameters by employing a protocol based on Hash-Comb, a quantized multi-hash data representation, combined with randomization.
arXiv Detail & Related papers (2024-06-26T14:54:12Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in Internet-of-Things (IoT)-based smart grids.
Adversarial distortion injected into the power signal will greatly affect the system's normal control and operation.
It is imperative to conduct vulnerability assessment for MLsgAPPs applied in the context of safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- HE-MAN -- Homomorphically Encrypted MAchine learning with oNnx models [0.23624125155742057]
Fully homomorphic encryption (FHE) is a promising technique that enables individuals to use ML services without giving up privacy.
We introduce HE-MAN, an open-source machine learning toolset for privacy preserving inference with ONNX models and homomorphically encrypted data.
Compared to prior work, HE-MAN supports a broad range of ML models in ONNX format out of the box without sacrificing accuracy.
arXiv Detail & Related papers (2023-02-16T12:37:14Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- SoK: Privacy Preserving Machine Learning using Functional Encryption: Opportunities and Challenges [1.2183405753834562]
We focus on Inner-product-FE and Quadratic-FE-based machine learning models for the privacy-preserving machine learning (PPML) applications.
To the best of our knowledge, this is the first work to systematize FE-based PPML approaches.
arXiv Detail & Related papers (2022-04-11T14:15:36Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Privacy-Preserving Machine Learning: Methods, Challenges and Directions [4.711430413139393]
Well-designed privacy-preserving machine learning (PPML) solutions have attracted increasing research interest from academia and industry.
This paper systematically reviews existing privacy-preserving approaches and proposes a PGU model to guide evaluation for various PPML solutions.
arXiv Detail & Related papers (2021-08-10T02:58:31Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.