A Review on Machine Unlearning
- URL: http://arxiv.org/abs/2411.11315v1
- Date: Mon, 18 Nov 2024 06:18:13 GMT
- Title: A Review on Machine Unlearning
- Authors: Haibo Zhang, Toru Nakamura, Takamasa Isohara, Kouichi Sakurai
- Abstract summary: This paper provides an in-depth review of the security and privacy concerns in machine learning models.
First, we present how machine learning can use users' private data in daily life and the role that the GDPR plays in this problem.
Then, we introduce the concept of machine unlearning by describing the security threats in machine learning models.
- Score: 3.1168315477643245
- Abstract: Recently, an increasing number of laws have come to govern the use of users' private data. For example, Article 17 of the General Data Protection Regulation (GDPR), the right to be forgotten, requires machine learning applications to remove a portion of data from a dataset and retrain the model if the user makes such a request. Furthermore, from the security perspective, training data for machine learning models, i.e., data that may contain user privacy, should be effectively protected, including through appropriate erasure. Therefore, researchers have proposed various privacy-preserving methods, such as machine unlearning, to deal with these issues. This paper provides an in-depth review of the security and privacy concerns in machine learning models. First, we present how machine learning can use users' private data in daily life and the role that the GDPR plays in this problem. Then, we introduce the concept of machine unlearning by describing the security threats in machine learning models and how to protect users' privacy from being violated using machine learning platforms. As the core content of the paper, we introduce and analyze current machine unlearning approaches and several representative research results and discuss them in the context of data lineage. Furthermore, we discuss future research challenges in this field.
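The Article 17 obligation above has a natural baseline: exact unlearning, i.e., deleting the requesting user's records and retraining from scratch. Below is a minimal sketch of that baseline, assuming a hypothetical tabular dataset with per-row user IDs and a scikit-learn classifier (none of which are specified by the paper):

```python
# Minimal sketch of exact unlearning: honor a deletion request by
# retraining from scratch on the retained data. The dataset, user IDs,
# and model below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))               # feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # labels
user_ids = rng.integers(0, 100, size=1000)   # which user each row came from

model = LogisticRegression().fit(X, y)       # original model

def unlearn_user(X, y, user_ids, user):
    """Exact unlearning: discard the user's rows and retrain."""
    keep = user_ids != user
    return LogisticRegression().fit(X[keep], y[keep])

retrained = unlearn_user(X, y, user_ids, user=42)
```

The retrained model provably carries no influence from the deleted user, at the cost of a full training run; the approximate unlearning approaches reviewed in the paper aim to match this guarantee more cheaply.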
Related papers
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Researchers have recently found potential privacy leakages in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially-private training creates a side channel that completely invalidates any provable privacy guarantees (see the sketch below).
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
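The deduplication attack mentioned above can be made concrete. The sketch below assumes a hypothetical distance-based near-duplicate filter with threshold TAU; the geometry is illustrative and not the paper's exact construction:

```python
# Sketch of the deduplication side channel: approximate dedup drops any
# record within distance TAU of one already kept. The adversary crafts
# points that are each close to the victim's record but far from each
# other, so the victim's presence decides whether ALL of them survive.
import numpy as np

TAU = 1.0

def deduplicate(points, tau=TAU):
    kept = []
    for p in points:
        if all(np.linalg.norm(p - q) > tau for q in kept):
            kept.append(p)
    return kept

victim = np.zeros(2)
# Four adversary points: each 0.9 from the victim (inside TAU), but
# pairwise at least 0.9 * sqrt(2) ~ 1.27 apart (outside TAU).
angles = np.linspace(0, 2 * np.pi, 4, endpoint=False)
adversary = [0.9 * np.array([np.cos(a), np.sin(a)]) for a in angles]

print(len(deduplicate([victim] + adversary)))  # 1: victim absorbs all four
print(len(deduplicate(adversary)))             # 4: all four survive
# Inputs differing in ONE record now differ in FIVE after dedup, breaking
# the neighboring-dataset assumption behind differential privacy proofs.
```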
arXiv Detail & Related papers (2023-09-11T16:49:05Z)
- A Survey on Differential Privacy with Machine Learning and Future Outlook [0.0]
Differential privacy is used to protect machine learning models from attacks and vulnerabilities.
This survey paper presents different differentially private machine learning algorithms categorized into two main categories.
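As a reference for the primitive most such algorithms build on, here is a minimal sketch of the Laplace mechanism; the query and the epsilon value are illustrative assumptions, not taken from the survey:

```python
# Minimal sketch of the Laplace mechanism: release a query answer with
# noise scaled to sensitivity / epsilon, which satisfies
# epsilon-differential privacy for that sensitivity.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    return true_value + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
ages = np.array([23, 35, 41, 29, 52])
# A counting query changes by at most 1 when one record is added or
# removed, so its sensitivity is 1.
private_count = laplace_mechanism(len(ages), sensitivity=1.0,
                                  epsilon=0.5, rng=rng)
print(private_count)
```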
arXiv Detail & Related papers (2022-11-19T14:20:53Z)
- Privacy-Preserving Machine Learning for Collaborative Data Sharing via Auto-encoder Latent Space Embeddings [57.45332961252628]
Privacy-preserving machine learning in data-sharing processes is an increasingly critical task.
This paper presents an innovative framework that uses Representation Learning via autoencoders to generate privacy-preserving embedded data.
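A minimal sketch of the general idea, assuming a toy PyTorch autoencoder on random stand-in data; the paper's actual architecture and training procedure are not reproduced here:

```python
# Sketch: train an autoencoder, then share only the latent embeddings
# instead of the raw records. Sizes and data are illustrative.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 16)                  # stand-in for sensitive records

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
decoder = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 16))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):                      # brief reconstruction training
    opt.zero_grad()
    loss = loss_fn(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

# Collaborators receive only the low-dimensional embeddings; the raw
# records and the decoder stay with the data owner.
shared_embeddings = encoder(X).detach()
print(shared_embeddings.shape)            # torch.Size([256, 4])
```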
arXiv Detail & Related papers (2022-11-10T17:36:58Z)
- Certified Data Removal in Sum-Product Networks [78.27542864367821]
Deleting the collected data is often insufficient to guarantee data privacy.
UnlearnSPN is an algorithm that removes the influence of single data points from a trained sum-product network.
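UnlearnSPN itself is not reproduced here, but the underlying idea, removing one point exactly by updating cached sufficient statistics, can be sketched for a single Gaussian leaf; the class and values below are illustrative assumptions, not the paper's algorithm:

```python
# Sketch of exact single-point removal via cached sufficient statistics
# (count, sum, sum of squares) for one Gaussian leaf. After remove(x),
# the leaf equals one fitted from scratch without x.
class GaussianLeaf:
    def __init__(self):
        self.n, self.s, self.ss = 0, 0.0, 0.0

    def add(self, x):
        self.n += 1; self.s += x; self.ss += x * x

    def remove(self, x):
        self.n -= 1; self.s -= x; self.ss -= x * x

    @property
    def mean(self):
        return self.s / self.n

    @property
    def var(self):
        return self.ss / self.n - self.mean ** 2

leaf = GaussianLeaf()
for x in [1.0, 2.0, 3.0, 10.0]:
    leaf.add(x)
leaf.remove(10.0)            # honor a deletion request for the outlier
print(leaf.mean, leaf.var)   # 2.0, 0.666... -- as if fit on [1, 2, 3]
```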
arXiv Detail & Related papers (2022-10-04T08:22:37Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- When Machine Learning Meets Privacy: A Survey and Outlook [22.958274878097683]
Privacy has emerged as a major concern in the era of machine learning-based artificial intelligence.
Work on the preservation of privacy in machine learning (ML) is still in its infancy.
This paper surveys the state of the art in privacy issues and solutions for machine learning.
arXiv Detail & Related papers (2020-11-24T00:52:49Z)
- ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning [10.190911271176201]
Machine learning models pose an additional privacy risk to the data by indirectly revealing information about it through model predictions and parameters.
There is an immediate need for a tool that can quantify the privacy risk to data from models.
We present ML Privacy Meter, a tool that can quantify the privacy risk to data from models through state-of-the-art membership inference attack techniques.
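A minimal sketch of the simplest attack in that family, a loss-threshold membership inference test; the dataset, model, and threshold choice are illustrative assumptions, not the ML Privacy Meter implementation:

```python
# Loss-threshold membership inference: training points tend to have
# lower loss than unseen points, so predicting "member" when the loss
# falls below a threshold measures how much the model leaks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the model."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

threshold = np.median(per_example_loss(model, X_te, y_te))
member_rate = np.mean(per_example_loss(model, X_tr, y_tr) < threshold)
nonmember_rate = np.mean(per_example_loss(model, X_te, y_te) < threshold)
# The gap between these two rates quantifies membership leakage.
print(member_rate, nonmember_rate)
```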
arXiv Detail & Related papers (2020-07-18T06:21:35Z)
- An Overview of Privacy in Machine Learning [2.8935588665357077]
This document provides background information on relevant concepts around machine learning and privacy.
We discuss possible adversarial models and settings, and cover a wide range of attacks related to private and/or sensitive information leakage.
arXiv Detail & Related papers (2020-05-18T13:05:17Z)