Evaluating Machine Unlearning via Epistemic Uncertainty
- URL: http://arxiv.org/abs/2208.10836v1
- Date: Tue, 23 Aug 2022 09:37:31 GMT
- Title: Evaluating Machine Unlearning via Epistemic Uncertainty
- Authors: Alexander Becker, Thomas Liebig
- Abstract summary: This work presents an evaluation metric for Machine Unlearning algorithms based on epistemic uncertainty.
This is, to the best of our knowledge, the first definition of a general evaluation metric for Machine Unlearning.
- Score: 78.27542864367821
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: There has been a growing interest in Machine Unlearning recently, primarily
due to legal requirements such as the General Data Protection Regulation (GDPR)
and the California Consumer Privacy Act. Thus, multiple approaches were
presented to remove the influence of specific target data points from a trained
model. However, when evaluating the success of unlearning, current approaches
either use adversarial attacks or compare their results to the optimal
solution, which usually incorporates retraining from scratch. We argue that
both ways are insufficient in practice. In this work, we present an evaluation
metric for Machine Unlearning algorithms based on epistemic uncertainty. This
is the first definition of a general evaluation metric for Machine Unlearning
to the best of our knowledge.
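The abstract does not spell out how epistemic uncertainty is quantified; a common proxy is the trace of the model's (diagonal) Fisher information on the forgotten data. The sketch below illustrates that idea in PyTorch under this assumption only; the names `fisher_trace` and `efficacy` and the exact score are illustrative, not the authors' formulation.
```python
# Minimal sketch: uncertainty-based unlearning evaluation.
# Assumption (not from the paper): epistemic uncertainty is approximated by the
# trace of the diagonal empirical Fisher information on the forget set; a model
# that retains little information about those points should yield a small trace.
import torch.nn.functional as F


def fisher_trace(model, forget_loader, device="cpu"):
    """Rough per-batch approximation of the diagonal Fisher trace on the forget set."""
    model.eval()
    total, batches = 0.0, 0
    for x, y in forget_loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        total += sum((p.grad ** 2).sum().item()
                     for p in model.parameters() if p.grad is not None)
        batches += 1
    return total / max(batches, 1)


def efficacy(model, forget_loader, device="cpu", eps=1e-12):
    """Illustrative score: higher when the model exposes less information about the forget set."""
    return 1.0 / (fisher_trace(model, forget_loader, device) + eps)
```
Comparing such a score before and after unlearning indicates how much information about the target points the model still exposes; the appeal of an uncertainty-based score, as the abstract argues, is that it does not rely on a retrained-from-scratch reference model.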
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints.
Many machine unlearning methods have been proposed that aim to 'erase' these datapoints from trained models.
We propose the RESTOR framework for machine unlearning, organized along several dimensions.
arXiv Detail & Related papers (2024-10-31T20:54:35Z) - Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the effect of a small "forget set" of training data on a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in more challenging settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach large language model (LLM) unlearning via gradient ascent (GA).
Despite their simplicity and efficiency, we suggest that GA-based methods are prone to excessive unlearning.
We propose several control methods that can regulate the extent of excessive unlearning (see the sketch of plain GA-based unlearning after this list).
arXiv Detail & Related papers (2024-06-13T14:41:00Z) - Adversarial Machine Unlearning [26.809123658470693]
This paper focuses on the challenge of machine unlearning, aiming to remove the influence of specific training data on machine learning models.
Traditionally, the development of unlearning algorithms runs parallel with that of membership inference attacks (MIAs), a type of privacy threat.
We propose a game-theoretic framework that integrates MIAs into the design of unlearning algorithms.
arXiv Detail & Related papers (2024-06-11T20:07:22Z) - Gone but Not Forgotten: Improved Benchmarks for Machine Unlearning [0.0]
We describe and propose alternative evaluation methods for machine unlearning algorithms.
We show the utility of our alternative evaluations via a series of experiments with state-of-the-art unlearning algorithms on different computer vision datasets.
arXiv Detail & Related papers (2024-05-29T15:53:23Z) - Towards Reliable Empirical Machine Unlearning Evaluation: A Game-Theoretic View [5.724350004671127]
We propose a game-theoretic framework that formalizes the evaluation process as a game between unlearning algorithms and MIA adversaries.
We show that the evaluation metric induced from the game enjoys provable guarantees that the existing evaluation metrics fail to satisfy.
This work presents a novel and reliable approach to empirically evaluating unlearning algorithms, paving the way for the development of more effective unlearning techniques.
arXiv Detail & Related papers (2024-04-17T17:20:27Z) - Machine unlearning through fine-grained model parameters perturbation [26.653596302257057]
We propose inexact machine unlearning strategies based on fine-grained Top-K and Random-k parameter perturbations.
We also tackle the challenge of evaluating the effectiveness of machine unlearning.
arXiv Detail & Related papers (2024-01-09T07:14:45Z) - Towards Unbounded Machine Unlearning [13.31957848633701]
We study unlearning for different applications: removing biases (RB), resolving confusion (RC), and user privacy (UP), with the view that each has its own desiderata, definitions of 'forgetting', and associated metrics for forget quality.
For UP, we propose a novel adaptation of a strong Membership Inference Attack for unlearning.
We also propose SCRUB, a novel unlearning algorithm, which is consistently a top performer for forget quality across the different application-dependent metrics for RB, RC, and UP.
arXiv Detail & Related papers (2023-02-20T10:15:36Z) - Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
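The entries above describe gradient-ascent (GA) unlearning and its failure modes only at a high level. For concreteness, the sketch below illustrates plain GA-based unlearning referenced in the "Unlearning with Control" entry; it is a minimal PyTorch sketch under my own assumptions, and the hyperparameters and loss-cap guard are illustrative rather than taken from any of the listed papers, which propose more careful controls.
```python
# Minimal sketch of gradient-ascent (GA) unlearning: take gradient *ascent*
# steps on the forget-set loss so the model "forgets" those points.
# Hyperparameters (lr, steps, max_forget_loss) are illustrative; the loss cap
# is one crude guard against the excessive unlearning the entry warns about.
import torch
import torch.nn.functional as F


def gradient_ascent_unlearn(model, forget_loader, lr=1e-4, steps=5,
                            max_forget_loss=5.0, device="cpu"):
    model.train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in forget_loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x), y)
            if loss.item() > max_forget_loss:  # stop before the model collapses
                return model
            opt.zero_grad()
            (-loss).backward()                 # negated loss, i.e. gradient ascent
            opt.step()
    return model
```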