Exploring the Landscape of Machine Unlearning: A Comprehensive Survey
and Taxonomy
- URL: http://arxiv.org/abs/2305.06360v6
- Date: Thu, 1 Feb 2024 01:07:22 GMT
- Title: Exploring the Landscape of Machine Unlearning: A Comprehensive Survey
and Taxonomy
- Authors: Thanveer Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, and
Qing Li
- Abstract summary: Machine unlearning (MU) is gaining increasing attention due to the need to remove or modify predictions made by machine learning (ML) models.
This paper presents a comprehensive survey of MU, covering current state-of-the-art techniques and approaches.
The paper also highlights the challenges that need to be addressed, including attack sophistication, standardization, transferability, interpretability, and resource constraints.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine unlearning (MU) is gaining increasing attention due to the need to
remove or modify predictions made by machine learning (ML) models. While
training models have become more efficient and accurate, the importance of
unlearning previously learned information has become increasingly significant
in fields such as privacy, security, and fairness. This paper presents a
comprehensive survey of MU, covering current state-of-the-art techniques and
approaches, including data deletion, perturbation, and model updates. In
addition, commonly used metrics and datasets are also presented. The paper also
highlights the challenges that need to be addressed, including attack
sophistication, standardization, transferability, interpretability, training
data, and resource constraints. The contributions of this paper include
discussions about the potential benefits of MU and its future directions.
Additionally, the paper emphasizes the need for researchers and practitioners
to continue exploring and refining unlearning techniques to ensure that ML
models can adapt to changing circumstances while maintaining user trust. The
importance of unlearning is further highlighted in making Artificial
Intelligence (AI) more trustworthy and transparent, especially with the
increasing importance of AI in various domains that involve large amounts of
personal user data.
Related papers
- Verifying Machine Unlearning with Explainable AI [46.7583989202789]
We investigate the effectiveness of Explainable AI (XAI) in verifying Machine Unlearning (MU) within the context of harbor front monitoring.
Our proof-of-concept introduces attribution features as an innovative verification step for MU, expanding beyond traditional metrics.
We propose two novel XAI-based metrics, Heatmap Coverage (HC) and Attention Shift (AS), to evaluate the effectiveness of these methods.
arXiv Detail & Related papers (2024-11-20T13:57:32Z) - Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [49.043599241803825]
The Iterative Contrastive Unlearning (ICU) framework consists of three core components.
A Knowledge Unlearning Induction module removes specific knowledge through an unlearning loss.
A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal.
An Iterative Unlearning Refinement module dynamically assesses the unlearning extent on specific data pieces and makes iterative updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - Machine Unlearning for Traditional Models and Large Language Models: A Short Survey [11.539080008361662]
Machine unlearning aims to delete data and reduce its impact on models according to user requests.
This paper categorizes and investigates unlearning on both traditional models and Large Language Models (LLMs).
arXiv Detail & Related papers (2024-04-01T16:08:18Z) - The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z) - Machine Unlearning: Taxonomy, Metrics, Applications, Challenges, and
Prospects [17.502158848870426]
Data users have been granted the right to be forgotten for their data.
In the course of machine learning (ML), this right requires a model provider to delete user data.
Machine unlearning emerges to address this, which has garnered ever-increasing attention from both industry and academia.
arXiv Detail & Related papers (2024-03-13T05:11:24Z) - Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z) - Learnware: Small Models Do Big [69.88234743773113]
The prevailing big-model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet addressed those issues, while becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which aims to spare users the need to build machine learning models from scratch, with the hope of reusing small models for purposes even beyond their original ones.
arXiv Detail & Related papers (2022-10-07T15:55:52Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.