Confidential Machine Learning on Untrusted Platforms: A Survey
- URL: http://arxiv.org/abs/2012.08156v1
- Date: Tue, 15 Dec 2020 08:57:02 GMT
- Title: Confidential Machine Learning on Untrusted Platforms: A Survey
- Authors: Sagar Sharma, Keke Chen
- Abstract summary: We will focus on the cryptographic approaches for confidential machine learning (CML).
We will also cover other directions, such as perturbation-based approaches and CML in the hardware-assisted confidential computing environment.
The discussion takes a holistic view, considering the rich context of related threat models, security assumptions, attacks, design philosophies, and the associated trade-offs among data utility, cost, and confidentiality.
- Score: 10.45742327204133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With ever-growing data and the need for developing powerful machine learning
models, data owners increasingly depend on untrusted platforms (e.g., public
clouds, edges, and machine learning service providers). However, sensitive data
and models become susceptible to unauthorized access, misuse, and privacy
compromises. Recently, a body of research has been developed to train machine
learning models on encrypted outsourced data with untrusted platforms. In this
survey, we summarize the studies in this emerging area with a unified framework
to highlight the major challenges and approaches. We will focus on the
cryptographic approaches for confidential machine learning (CML), while also
covering other directions such as perturbation-based approaches and CML in the
hardware-assisted confidential computing environment. The discussion takes a
holistic view, considering the rich context of related threat models, security
assumptions, attacks, design philosophies, and the associated trade-offs among
data utility, cost, and confidentiality.
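To make the cryptographic direction concrete, below is a minimal sketch of one standard building block, 2-out-of-2 additive secret sharing, in which two non-colluding servers evaluate a linear model on secret-shared features without either server seeing the plaintext. This is an illustrative example, not code from the paper; the modulus, weights, and features are invented for the demo.

```python
# A minimal sketch (illustrative only) of 2-out-of-2 additive secret
# sharing, one cryptographic CML building block: two non-colluding servers
# jointly evaluate a linear model without seeing the plaintext features.
import secrets

Q = 2**61 - 1  # modulus for the additive sharing (assumed for the demo)

def share(x: int) -> tuple[int, int]:
    """Split x into two random shares with x = (s0 + s1) mod Q."""
    s0 = secrets.randbelow(Q)
    return s0, (x - s0) % Q

def reconstruct(s0: int, s1: int) -> int:
    return (s0 + s1) % Q

# The data owner secret-shares its feature vector between server 0 and
# server 1; each server alone holds only uniformly random values.
features = [3, 7, 2]
shares = [share(x) for x in features]

# A public weight vector; each server computes a local dot product on its
# shares only -- linearity keeps the shares consistent.
weights = [2, 5, 1]
partial0 = sum(w * s[0] for w, s in zip(weights, shares)) % Q
partial1 = sum(w * s[1] for w, s in zip(weights, shares)) % Q

# Only the data owner, holding both partial results, recovers the score.
score = reconstruct(partial0, partial1)
assert score == sum(w * x for w, x in zip(weights, features))  # 43
print(score)
```

Linearity is what keeps the shares consistent here; non-linear steps such as activations require heavier protocols, which is one source of the utility, cost, and confidentiality trade-offs the abstract highlights.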
Related papers
- Survey of Security and Data Attacks on Machine Unlearning In Financial and E-Commerce [0.0]
This paper surveys the landscape of security and data attacks on machine unlearning, with a focus on financial and e-commerce applications.
To mitigate these risks, various defense strategies are examined, including differential privacy, robust cryptographic guarantees, and Zero-Knowledge Proofs (ZKPs).
This survey highlights the need for continued research and innovation in secure machine unlearning, as well as the importance of developing strong defenses against evolving attack vectors.
arXiv Detail & Related papers (2024-09-29T00:30:36Z)
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- A Survey on Blockchain-Based Federated Learning and Data Privacy [1.0499611180329802]
Federated learning is a decentralized machine learning paradigm that allows multiple clients to collaborate by leveraging local computation and transmitting only model updates (a toy sketch of this weight-sharing pattern appears after this list).
On the other hand, federated learning is prone to data leakage when privacy-preserving mechanisms are not employed during storage, transfer, and sharing.
This survey aims to compare the performance and security of various data privacy mechanisms adopted in blockchain-based federated learning architectures.
arXiv Detail & Related papers (2023-06-29T23:43:25Z)
- A Survey on Differential Privacy with Machine Learning and Future Outlook [0.0]
Differential privacy is used to protect machine learning models from attacks and vulnerabilities.
This survey presents differentially private machine learning algorithms grouped into two main categories; a toy sketch of the underlying noise mechanism appears after this list.
arXiv Detail & Related papers (2022-11-19T14:20:53Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Reliability Check via Weight Similarity in Privacy-Preserving Multi-Party Machine Learning [7.552100672006174]
We focus on addressing the concerns of data privacy, model privacy, and data quality associated with multi-party machine learning.
We present a scheme for privacy-preserving collaborative learning that checks the participants' data quality while guaranteeing data and model privacy.
arXiv Detail & Related papers (2021-01-14T08:55:42Z)
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop a unified taxonomy of these attacks.
arXiv Detail & Related papers (2020-12-18T22:38:47Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework of Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z)
- An Overview of Privacy in Machine Learning [2.8935588665357077]
This document provides background information on relevant concepts around machine learning and privacy.
We discuss possible adversarial models and settings, and cover a wide range of attacks related to private and/or sensitive information leakage.
arXiv Detail & Related papers (2020-05-18T13:05:17Z)
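As a companion to the differential-privacy entry above, here is a minimal sketch of the Laplace mechanism, the basic noise-addition primitive behind perturbation-based approaches. The dataset, clipping bounds, and epsilon are assumptions made up for the example, not values from any surveyed paper.

```python
# A minimal sketch (illustrative only) of the Laplace mechanism: releasing
# a differentially private mean of a small dataset.
import random

def dp_mean(values, lower, upper, epsilon):
    """Release the mean of `values` with epsilon-differential privacy.

    Each value is clipped to [lower, upper], so replacing one record can
    change the mean by at most (upper - lower) / n: the query sensitivity.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    # Difference of two Exp(1) samples is a standard Laplace(0, 1) sample.
    noise = random.expovariate(1.0) - random.expovariate(1.0)
    return sum(clipped) / n + noise * (sensitivity / epsilon)

random.seed(0)  # deterministic demo output
print(dp_mean([3.2, 7.9, 5.5, 6.1], lower=0.0, upper=10.0, epsilon=1.0))
```

Smaller epsilon means larger noise and stronger privacy, the same utility-versus-confidentiality trade-off the main abstract discusses.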
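And for the federated-learning entries above, a minimal sketch of the weight-sharing pattern they describe: each client fits a model locally and transmits only parameters, never raw records. The one-parameter linear model and the client data are invented for illustration.

```python
# A minimal sketch (illustrative only) of federated averaging: clients
# share model weights, not raw data. Each client fits y = w * x locally.

def local_fit(points):
    """Least-squares slope for y = w * x on one client's private data."""
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points)
    return num / den

# Raw data never leaves the client; only the fitted weight is transmitted.
client_data = [
    [(1.0, 2.1), (2.0, 3.9)],  # client A
    [(1.0, 1.8), (3.0, 6.3)],  # client B
    [(2.0, 4.4), (4.0, 7.6)],  # client C
]
local_weights = [local_fit(d) for d in client_data]

# The server aggregates weights, not records.
global_w = sum(local_weights) / len(local_weights)
print(global_w)  # ~2.0
```

As the "Distributed Machine Learning and the Semblance of Trust" entry stresses, the transmitted weights can still leak information about the underlying data, so this pattern alone is not a formal privacy guarantee.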