SoK: Privacy Preserving Machine Learning using Functional Encryption: Opportunities and Challenges
- URL: http://arxiv.org/abs/2204.05136v1
- Date: Mon, 11 Apr 2022 14:15:36 GMT
- Title: SoK: Privacy Preserving Machine Learning using Functional Encryption: Opportunities and Challenges
- Authors: Prajwal Panzade and Daniel Takabi
- Abstract summary: We focus on Inner-product-FE and Quadratic-FE-based machine learning models for the privacy-preserving machine learning (PPML) applications.
To the best of our knowledge, this is the first work to systematize FE-based PPML approaches.
- Score: 1.2183405753834562
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the advent of functional encryption, new possibilities for computation
on encrypted data have arisen. Functional Encryption enables data owners to
grant third-party access to perform specified computations without disclosing
their inputs. It also yields computation results in plaintext, unlike fully
homomorphic encryption. The ubiquity of machine learning has led to the
collection of massive private data in the cloud computing environment. This
raises potential privacy issues and the need for more private and secure
computing solutions. Numerous efforts have been made in privacy-preserving
machine learning (PPML) to address security and privacy concerns. There are
approaches based on fully homomorphic encryption (FHE), secure multiparty
computation (SMC), and, more recently, functional encryption (FE). However,
FE-based PPML is still in its infancy and has not yet received much attention
compared to FHE-based PPML approaches. In this paper, we provide a
systematization of FE-based PPML works, summarizing the state of the art in the
literature. We focus on Inner-product-FE and Quadratic-FE-based machine
learning models for PPML applications. We analyze the performance and
usability of the available FE libraries and their applications to PPML. We also
discuss potential directions for FE-based PPML approaches. To the best of our
knowledge, this is the first work to systematize FE-based PPML approaches.
Related papers
- The Communication-Friendly Privacy-Preserving Machine Learning against Malicious Adversaries [14.232901861974819]
Privacy-preserving machine learning (PPML) is an innovative approach that allows for secure data analysis while safeguarding sensitive information.
We introduce an efficient protocol for secure linear function evaluation.
We extend the protocol to handle linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models.
arXiv Detail & Related papers (2024-11-14T08:55:14Z)
- A Pervasive, Efficient and Private Future: Realizing Privacy-Preserving Machine Learning Through Hybrid Homomorphic Encryption [2.434439232485276]
Privacy-Preserving Machine Learning (PPML) methods have been proposed to mitigate the privacy and security risks of ML models.
A modern encryption scheme that combines symmetric cryptography with HE has been introduced to overcome these challenges.
This work introduces HHE to the ML field by proposing resource-friendly PPML protocols for edge devices.
arXiv Detail & Related papers (2024-09-10T11:04:14Z)
- Wildest Dreams: Reproducible Research in Privacy-preserving Neural Network Training [2.853180143237022]
This work focuses on the ML model's training phase, where maintaining user data privacy is of utmost importance.
We provide a solid theoretical background that eases the understanding of current approaches.
We reproduce results for some of the papers and examine at what level existing works in the field provide support for open science.
arXiv Detail & Related papers (2024-03-06T10:25:36Z)
- GuardML: Efficient Privacy-Preserving Machine Learning Services Through Hybrid Homomorphic Encryption [2.611778281107039]
Privacy-Preserving Machine Learning (PPML) methods have been introduced to safeguard the privacy and security of Machine Learning models.
A modern cryptographic scheme, Hybrid Homomorphic Encryption (HHE), has recently emerged.
We develop and evaluate an HHE-based PPML application for classifying heart disease based on sensitive ECG data.
arXiv Detail & Related papers (2024-01-26T13:12:52Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind)
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Differentially Private Deep Q-Learning for Pattern Privacy Preservation in MEC Offloading [76.0572817182483]
Attackers may eavesdrop on the offloading decisions to infer the edge server's (ES's) queue information and users' usage patterns.
We propose an offloading strategy which jointly minimizes the latency, ES's energy consumption, and task dropping rate, while preserving pattern privacy (PP).
We develop a Differential Privacy Deep Q-learning based Offloading (DP-DQO) algorithm to solve this problem while addressing the PP issue by injecting noise into the generated offloading decisions.
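As a generic illustration of the noise-injection idea above (not necessarily the DP-DQO mechanism itself), a discrete offloading decision can be privatized with randomized response; the target count `k` and budget `epsilon` below are illustrative parameters, not values from the paper.

```python
import math
import random

def randomized_response(decision, k, epsilon):
    """epsilon-DP randomized response over k possible offloading targets.

    Keeps the true decision with probability e^eps / (e^eps + k - 1);
    otherwise outputs a uniformly random *other* target, so any single
    observed decision reveals little about the true one.
    """
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_keep:
        return decision
    others = [d for d in range(k) if d != decision]
    return random.choice(others)

# Small epsilon -> strong privacy, noisy decisions; large epsilon -> near-truthful.
noisy = randomized_response(2, 5, 0.5)
```

DP-DQO perturbs decisions produced by a deep Q-network; the sketch only shows the privacy layer, not the reinforcement-learning component.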
arXiv Detail & Related papers (2023-02-09T12:50:18Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z)
- Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z)
- Privacy-Preserving Machine Learning: Methods, Challenges and Directions [4.711430413139393]
Well-designed privacy-preserving machine learning (PPML) solutions have attracted increasing research interest from academia and industry.
This paper systematically reviews existing privacy-preserving approaches and proposes a PGU model to guide evaluation for various PPML solutions.
arXiv Detail & Related papers (2021-08-10T02:58:31Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs)
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.