Confidential Machine Learning on Untrusted Platforms: A Survey
- URL: http://arxiv.org/abs/2012.08156v1
- Date: Tue, 15 Dec 2020 08:57:02 GMT
- Title: Confidential Machine Learning on Untrusted Platforms: A Survey
- Authors: Sagar Sharma, Keke Chen
- Abstract summary: We will focus on the cryptographic approaches for confidential machine learning (CML)
We will also cover other directions such as perturbation-based approaches and CML in the hardware-assisted confidential computing environment.
The discussion takes a holistic view of the rich context of related threat models, security assumptions, attacks, design philosophies, and the associated trade-offs among data utility, cost, and confidentiality.
- Score: 10.45742327204133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With ever-growing data and the need for developing powerful machine learning
models, data owners increasingly depend on untrusted platforms (e.g., public
clouds, edges, and machine learning service providers). However, sensitive data
and models become susceptible to unauthorized access, misuse, and privacy
compromises. Recently, a body of research has been developed to train machine
learning models on encrypted outsourced data with untrusted platforms. In this
survey, we summarize the studies in this emerging area with a unified framework
to highlight the major challenges and approaches. We will focus on the
cryptographic approaches for confidential machine learning (CML), while also
covering other directions such as perturbation-based approaches and CML in the
hardware-assisted confidential computing environment. The discussion takes a
holistic view of the rich context of related threat models, security
assumptions, attacks, design philosophies, and the associated trade-offs
among data utility, cost, and confidentiality.
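As a concrete illustration of the cryptographic direction this survey covers, additive secret sharing lets a data owner split a sensitive value between two non-colluding untrusted servers so that each server alone sees only a uniformly random share, yet linear operations (a common building block in CML training protocols) can still be computed on the shares. The following minimal sketch is a generic two-party example, not a protocol from the survey; the prime modulus and helper names are illustrative choices:

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; all arithmetic is done mod P

def share(x):
    """Split x into two additive shares; each share alone is uniformly random."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0, s1):
    """Combine the two shares to recover the secret."""
    return (s0 + s1) % P

def add_shares(a, b):
    """Each server adds its own shares locally, yielding shares of a + b
    with no communication and no access to the plaintext values."""
    return (a[0] + b[0]) % P, (a[1] + b[1]) % P

# Data owner secret-shares two inputs across two untrusted servers.
x_shares = share(12)
y_shares = share(30)

# The servers compute on shares without ever seeing 12 or 30.
z_shares = add_shares(x_shares, y_shares)

print(reconstruct(*z_shares))  # 42
```

Multiplication on shares requires extra machinery (e.g., precomputed correlated randomness), which is one reason the survey's protocols trade off cost against functionality.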
Related papers
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z) - A Survey on Blockchain-Based Federated Learning and Data Privacy [1.0499611180329802]
Federated learning is a decentralized machine learning paradigm that allows multiple clients to collaborate by leveraging local computational power and transmitting model updates.
However, federated learning risks data leakage when privacy-preserving mechanisms are not employed during storage, transfer, and sharing.
This survey aims to compare the performance and security of various data privacy mechanisms adopted in blockchain-based federated learning architectures.
arXiv Detail & Related papers (2023-06-29T23:43:25Z) - White-box Inference Attacks against Centralized Machine Learning and
Federated Learning [0.0]
We evaluate the impact of different neural network layers, gradients, gradient norms, and fine-tuned models on membership inference attack performance under prior knowledge.
The results show that the centralized machine learning model exhibits more severe membership information leakage in all aspects.
arXiv Detail & Related papers (2022-12-15T07:07:19Z) - A Survey on Differential Privacy with Machine Learning and Future
Outlook [0.0]
Differential privacy is used to protect machine learning models from attacks and vulnerabilities.
This survey paper presents different differentially private machine learning algorithms categorized into two main categories.
arXiv Detail & Related papers (2022-11-19T14:20:53Z) - Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z) - Reliability Check via Weight Similarity in Privacy-Preserving
Multi-Party Machine Learning [7.552100672006174]
We focus on addressing the concerns of data privacy, model privacy, and data quality associated with multi-party machine learning.
We present a scheme for privacy-preserving collaborative learning that checks the participants' data quality while guaranteeing data and model privacy.
arXiv Detail & Related papers (2021-01-14T08:55:42Z) - Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks,
and Defenses [150.64470864162556]
This work systematically categorizes and discusses a wide range of dataset vulnerabilities and exploits.
In addition to describing various poisoning and backdoor threat models and the relationships among them, we develop their unified taxonomy.
arXiv Detail & Related papers (2020-12-18T22:38:47Z) - PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework
Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework for Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z) - An Overview of Privacy in Machine Learning [2.8935588665357077]
This document provides background information on relevant concepts around machine learning and privacy.
We discuss possible adversarial models and settings, and cover a wide range of attacks related to private and/or sensitive information leakage.
arXiv Detail & Related papers (2020-05-18T13:05:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.