Privacy-Preserving Machine Learning: Methods, Challenges and Directions
- URL: http://arxiv.org/abs/2108.04417v1
- Date: Tue, 10 Aug 2021 02:58:31 GMT
- Title: Privacy-Preserving Machine Learning: Methods, Challenges and Directions
- Authors: Runhua Xu, Nathalie Baracaldo, James Joshi
- Abstract summary: Well-designed privacy-preserving machine learning (PPML) solutions have attracted increasing research interest from academia and industry.
This paper systematically reviews existing privacy-preserving approaches and proposes a PGU model to guide the evaluation of various PPML solutions.
- Score: 4.711430413139393
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning (ML) is increasingly being adopted in a wide
variety of application domains. Typically, a well-performing ML model,
especially an emerging deep neural network model, relies on a large volume of
training data and high-powered computational resources. The need for such a
vast volume of data raises serious privacy concerns because of the risk of
leaking highly privacy-sensitive information, and because evolving regulatory
environments increasingly restrict access to and use of privacy-sensitive
data. Furthermore, a trained ML model may also be vulnerable to adversarial
attacks such as membership/property inference attacks and model inversion
attacks. Hence, well-designed privacy-preserving ML (PPML) solutions are
crucial and have attracted increasing research interest from academia and
industry. A growing body of PPML work integrates privacy-preserving
techniques into ML algorithms, fuses privacy-preserving approaches into the
ML pipeline, or designs privacy-preserving architectures for existing ML
systems. In particular, existing PPML work cuts across ML, systems, security,
and privacy; hence, there is a critical need to understand state-of-the-art
studies, related challenges, and a roadmap for future research. This paper
systematically reviews and summarizes existing privacy-preserving approaches
and proposes a PGU model to guide the evaluation of various PPML solutions by
carefully decomposing their privacy-preserving functionalities. The PGU model
is designed as the triad of Phase, Guarantee, and technical Utility.
Furthermore, we discuss the unique characteristics and challenges of PPML and
outline possible directions for future work that would benefit a wide range
of research communities across the ML, distributed systems, security, and
privacy areas.
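To make the attack surface named in the abstract concrete, here is a minimal, illustrative sketch of a loss-threshold membership inference attack; the model outputs, labels, and threshold are hypothetical placeholders for this example, not artifacts of the paper.

```python
import numpy as np

def cross_entropy(probs: np.ndarray, label: int) -> float:
    """Per-example cross-entropy loss; training members tend to score lower."""
    return -float(np.log(probs[label] + 1e-12))

def infer_membership(probs: np.ndarray, label: int, threshold: float = 0.5) -> bool:
    """Guess 'member' when the loss on (x, y) falls below a threshold
    calibrated on data known to be outside the training set."""
    return cross_entropy(probs, label) < threshold

# Hypothetical model outputs for two examples of a 3-class problem.
seen_probs = np.array([0.95, 0.03, 0.02])    # confidently fit -> likely member
unseen_probs = np.array([0.40, 0.35, 0.25])  # uncertain -> likely non-member
print(infer_membership(seen_probs, 0))       # True
print(infer_membership(unseen_probs, 0))     # False
```

The intuition: a model usually fits its training examples more tightly than unseen ones, so an unusually low loss hints that a record was a training-set member.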
Related papers
- LLM-PBE: Assessing Data Privacy in Large Language Models [111.58198436835036]
Large Language Models (LLMs) have become integral to numerous domains, significantly advancing applications in data management, mining, and analysis.
Despite the critical nature of this issue, no existing literature offers a comprehensive assessment of data privacy risks in LLMs.
Our paper introduces LLM-PBE, a toolkit crafted specifically for the systematic evaluation of data privacy risks in LLMs.
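As a rough illustration of the kind of leakage probe such a toolkit automates, the sketch below checks whether a model reproduces a sensitive training suffix verbatim; the `generate` function is a stand-in stub, not the LLM-PBE API.

```python
def generate(prompt: str) -> str:
    """Stub standing in for a real LLM call; returns a canned continuation."""
    canned = {"Alice's API key is ": "sk-12345-SECRET"}
    return canned.get(prompt, "")

def extraction_probe(prefix: str, true_suffix: str) -> bool:
    """Flag memorization when the model emits the exact training suffix."""
    return generate(prefix).startswith(true_suffix)

# A training record split into a public prefix and its sensitive suffix.
print(extraction_probe("Alice's API key is ", "sk-12345-SECRET"))  # True -> leak
```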
arXiv Detail & Related papers (2024-08-23T01:37:29Z)
- A Quantization-based Technique for Privacy Preserving Distributed Learning [2.2139875218234475]
We describe a novel, regulation-compliant data protection technique for the distributed training of Machine Learning models.
Our method protects both the training data and the ML model parameters by employing a protocol based on a quantized multi-hash data representation, Hash-Comb, combined with randomization.
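A minimal sketch of the general quantize-then-hash idea: bucket a value so nearby readings collide, then hash the bucket with per-round salts. The bucket width, salts, and SHA-256 choice are illustrative assumptions, not the paper's actual Hash-Comb parameters.

```python
import hashlib

def quantize(x: float, step: float = 0.1) -> int:
    """Map a raw value to a coarse bucket so that nearby values collide."""
    return round(x / step)

def multi_hash(x: float, salts: list[bytes], step: float = 0.1) -> list[str]:
    """Represent x by several salted hashes of its quantized bucket."""
    bucket = str(quantize(x, step)).encode()
    return [hashlib.sha256(salt + bucket).hexdigest() for salt in salts]

salts = [b"round-0", b"round-1"]  # secrets shared among the training parties
print(multi_hash(0.4213, salts) == multi_hash(0.4249, salts))  # True: same bucket
print(multi_hash(0.4213, salts) == multi_hash(0.5100, salts))  # False
```

Comparing hashes then reveals whether two parties hold similar values without exposing the raw measurements themselves.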
arXiv Detail & Related papers (2024-06-26T14:54:12Z)
- State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey [0.0]
This paper examines the evolving landscape of machine learning (ML) and its profound impact across various sectors.
It focuses on the emerging field of Privacy-preserving Machine Learning (PPML).
As ML applications become increasingly integral to industries like telecommunications, financial technology, and surveillance, they raise significant privacy concerns.
arXiv Detail & Related papers (2024-02-25T17:31:06Z)
- GuardML: Efficient Privacy-Preserving Machine Learning Services Through Hybrid Homomorphic Encryption [2.611778281107039]
Privacy-Preserving Machine Learning (PPML) methods have been introduced to safeguard the privacy and security of Machine Learning models.
A modern cryptographic scheme, Hybrid Homomorphic Encryption (HHE), has recently emerged.
We develop and evaluate an HHE-based PPML application for classifying heart disease based on sensitive ECG data.
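HHE itself combines a symmetric cipher with homomorphic evaluation; as a far simpler stand-in, the toy and deliberately insecure Paillier sketch below only illustrates the underlying idea of computing on ciphertexts, not the GuardML scheme.

```python
import random
from math import gcd

# Toy Paillier keypair with insecure parameter sizes (demonstration only).
p, q = 61, 53
n = p * q                                       # public modulus
n2, g = n * n, n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # precomputed decryption factor

def encrypt(m: int) -> int:
    """Randomized encryption: identical plaintexts yield distinct ciphertexts."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# A server can add encrypted values without ever seeing the plaintexts:
# multiplying Paillier ciphertexts adds the underlying messages.
c = (encrypt(17) * encrypt(25)) % n2
print(decrypt(c))  # 42
```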
arXiv Detail & Related papers (2024-01-26T13:12:52Z)
- Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
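For intuition, here is a minimal sketch of the simplest DP primitive, the Laplace mechanism applied to a counting query; the records and epsilon below are arbitrary illustrative choices, and DP generative modeling builds far more machinery on this basic idea.

```python
import numpy as np

def dp_count(records: list[bool], epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise scaled to sensitivity / epsilon.

    Adding or removing one record changes a counting query by at most 1,
    so the sensitivity is 1 and the noise scale is 1 / epsilon.
    """
    return sum(records) + float(np.random.laplace(loc=0.0, scale=1.0 / epsilon))

has_condition = [True, False, True, True, False]  # sensitive attribute
print(dp_count(has_condition, epsilon=1.0))       # noisy answer near 3
```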
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Machine Learning with Confidential Computing: A Systematization of Knowledge [9.632031075287047]
Privacy and security challenges in Machine Learning (ML) have become increasingly severe, along with ML's pervasive development and the recent demonstration of large attack surfaces.
As a mature system-oriented approach, Confidential Computing has been utilized in both academia and industry to mitigate privacy and security issues in various ML scenarios.
We systematize the prior work on Confidential Computing-assisted ML techniques that provide i) confidentiality guarantees and ii) integrity assurances, and discuss their advanced features and drawbacks.
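As a loose analogy for the integrity half of that story: remote attestation ultimately compares a cryptographic measurement of the loaded code against an expected value. The sketch below is a conceptual stand-in under that assumption, not a real TEE or attestation API.

```python
import hashlib

# Hypothetical known-good measurement of the workload binary.
EXPECTED_MEASUREMENT = hashlib.sha256(b"model_training_code_v1").hexdigest()

def attest(loaded_code: bytes) -> bool:
    """Accept the environment only if its code hash matches the expected one."""
    return hashlib.sha256(loaded_code).hexdigest() == EXPECTED_MEASUREMENT

print(attest(b"model_training_code_v1"))  # True: provision secrets and run
print(attest(b"tampered_code"))           # False: refuse to release the data
```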
arXiv Detail & Related papers (2022-08-22T08:23:53Z)
- When Machine Learning Meets Spectrum Sharing Security: Methodologies and Challenges [19.313414666640078]
The exponential growth of internet connected systems has generated numerous challenges, such as spectrum shortage issues.
Complicated and dynamic spectrum sharing (SS) systems can be exposed to different potential security and privacy issues.
Machine learning (ML) based methods have frequently been proposed to address those issues.
arXiv Detail & Related papers (2022-01-12T20:04:28Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
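A bare-bones federated averaging sketch makes both the setup and the concern concrete: clients share model updates rather than raw data, yet the updates themselves can still leak information. The client data, linear model, and learning rate below are illustrative, not from the paper.

```python
import numpy as np

def local_step(w: np.ndarray, X: np.ndarray, y: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One gradient step of linear regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg(w: np.ndarray, clients: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """The server averages client updates; raw data never leaves the clients."""
    return np.mean([local_step(w, X, y) for X, y in clients], axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(8, 2)), rng.normal(size=8)) for _ in range(3)]
w = np.zeros(2)
for _ in range(10):   # a few federated rounds
    w = fedavg(w, clients)
print(w)              # the aggregated global model
```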
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
New models and training techniques have been proposed to reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of the ML algorithm from different aspects.
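One of the simplest techniques in that toolbox is maximum-softmax-probability outlier detection, sketched below with made-up logits and an arbitrary confidence threshold.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = np.exp(logits - logits.max())  # shift for numerical stability
    return z / z.sum()

def is_outlier(logits: np.ndarray, threshold: float = 0.7) -> bool:
    """Flag inputs the classifier is unsure about as possible outliers."""
    return float(softmax(logits).max()) < threshold

print(is_outlier(np.array([4.0, 0.5, 0.1])))  # False: confident, in-distribution
print(is_outlier(np.array([1.0, 0.9, 0.8])))  # True: flat scores, suspicious input
```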
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
- Understanding the Usability Challenges of Machine Learning in High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.