Towards Unbounded Machine Unlearning
- URL: http://arxiv.org/abs/2302.09880v3
- Date: Mon, 30 Oct 2023 10:14:02 GMT
- Title: Towards Unbounded Machine Unlearning
- Authors: Meghdad Kurmanji, Peter Triantafillou, Jamie Hayes, Eleni
Triantafillou
- Abstract summary: We study unlearning for different applications (RB, RC, UP), with the view that each has its own desiderata, definitions for `forgetting' and associated metrics for forget quality.
For UP, we propose a novel adaptation of a strong Membership Inference Attack for unlearning.
We also propose SCRUB, a novel unlearning algorithm, which is consistently a top performer for forget quality across the different application-dependent metrics for RB, RC, and UP.
- Score: 13.31957848633701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep machine unlearning is the problem of `removing' from a trained neural
network a subset of its training set. This problem is very timely and has many
applications, including the key tasks of removing biases (RB), resolving
confusion (RC) (caused by mislabelled data in trained models), as well as
allowing users to exercise their `right to be forgotten' to protect User
Privacy (UP). This paper is the first, to our knowledge, to study unlearning
for different applications (RB, RC, UP), with the view that each has its own
desiderata, definitions for `forgetting' and associated metrics for forget
quality. For UP, we propose a novel adaptation of a strong Membership Inference
Attack for unlearning. We also propose SCRUB, a novel unlearning algorithm,
which is the only method that is consistently a top performer for forget
quality across the different application-dependent metrics for RB, RC, and UP.
At the same time, SCRUB is also consistently a top performer on metrics that
measure model utility (i.e. accuracy on retained data and generalization), and
is more efficient than previous work. The above are substantiated through a
comprehensive empirical evaluation against previous state-of-the-art.
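As a concrete, hedged illustration of the UP evaluation, a loss-based membership inference attack for unlearning can be set up as a binary classification problem over per-example losses. The sketch below is not the paper's exact attack; it trains a simple attacker (all names hypothetical) to separate forget-set losses from unseen test-set losses on the unlearned model, where near-chance accuracy suggests good forgetting.

```python
# Illustrative loss-based MIA for unlearning (a sketch, not the paper's attack):
# an attacker that cannot tell forget-set losses from test-set losses
# (accuracy ~0.5) indicates the forget set looks like unseen data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def mia_forget_score(forget_losses, test_losses):
    """Cross-validated attacker accuracy on per-example losses of the
    unlearned model; ~0.5 means forgetting is hard to detect."""
    X = np.concatenate([forget_losses, test_losses]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(forget_losses)), np.zeros(len(test_losses))])
    return cross_val_score(LogisticRegression(), X, y, cv=5, scoring="accuracy").mean()
```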
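SCRUB itself is cast as teacher-student training: a student initialized from the original model is pushed away from the frozen teacher on the forget set while staying close to it, and accurate, on the retain set. The following PyTorch-style sketch shows one such alternating objective under simplifying assumptions; the paper's actual schedule, loss weights, and checkpoint-selection details differ.

```python
# A minimal SCRUB-style sketch (simplified; hyperparameters are placeholders).
import torch
import torch.nn.functional as F

def distill_kl(student_logits, teacher_logits, T=2.0):
    # Standard distillation loss: KL between softened teacher and student outputs.
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T

def scrub_epoch(student, teacher, retain_loader, forget_loader, opt, alpha=1.0):
    teacher.eval()
    for x_f, _ in forget_loader:  # max-step: move away from teacher on forget data
        loss = -distill_kl(student(x_f), teacher(x_f).detach())
        opt.zero_grad(); loss.backward(); opt.step()
    for x_r, y_r in retain_loader:  # min-step: stay close and accurate on retain data
        logits = student(x_r)
        loss = alpha * distill_kl(logits, teacher(x_r).detach()) + F.cross_entropy(logits, y_r)
        opt.zero_grad(); loss.backward(); opt.step()

# Usage: student = copy.deepcopy(teacher); run scrub_epoch for a few epochs.
```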
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- Partially Blinded Unlearning: Class Unlearning for Deep Networks a Bayesian Perspective [4.31734012105466]
Machine Unlearning is the process of selectively discarding information designated to specific sets or classes of data from a pre-trained model.
We propose a methodology tailored for the purposeful elimination of information linked to a specific class of data from a pre-trained classification network.
Our novel approach, termed Partially-Blinded Unlearning (PBU), surpasses existing state-of-the-art class unlearning methods, demonstrating superior effectiveness.
arXiv Detail & Related papers (2024-03-24T17:33:22Z)
- Layer Attack Unlearning: Fast and Accurate Machine Unlearning via Layer Level Attack and Knowledge Distillation [21.587358050012032]
We propose a fast and novel machine unlearning paradigm at the layer level called layer attack unlearning.
In this work, we introduce the Partial-PGD algorithm to locate the samples to forget efficiently.
We also use Knowledge Distillation (KD) to reliably learn the decision boundaries from the teacher.
arXiv Detail & Related papers (2023-12-28T04:38:06Z)
- Task-Aware Machine Unlearning and Its Application in Load Forecasting [4.00606516946677]
This paper introduces a machine unlearning concept specifically designed to remove the influence of part of the dataset from an already trained forecaster.
A performance-aware algorithm is proposed that evaluates the sensitivity of local model parameter changes using influence functions and sample re-weighting.
We tested the unlearning algorithms on linear, CNN, and Mixer-based load forecasters with a realistic load dataset.
arXiv Detail & Related papers (2023-08-28T08:50:12Z)
- KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment [51.15802100354848]
We propose a general unlearning framework called KGA to induce forgetfulness.
Experiments on large-scale datasets show that KGA yields comprehensive improvements over baselines.
arXiv Detail & Related papers (2023-05-11T02:44:29Z)
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
- Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first general evaluation definition of unlearning.
arXiv Detail & Related papers (2022-08-23T09:37:31Z)
- Evaluating Inexact Unlearning Requires Revisiting Forgetting [14.199668091405064]
We introduce a novel test to measure the degree of forgetting, called Interclass Confusion (IC).
Despite being a black-box test, IC can probe whether information from the deletion set was erased up to the early layers of the network.
We empirically show that two simple unlearning methods, exact-unlearning and catastrophically forgetting the final k layers of a network, scale well to large deletion sets, unlike prior unlearning methods.
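For intuition, an IC-style check could look like the hedged sketch below, assuming PyTorch and reducing the test to a single confusion rate between two classes a and b whose labels were swapped in the deletion set before training; the paper's full test is more elaborate.

```python
# Sketch of an Interclass-Confusion-style measurement (simplified).
import torch

def interclass_confusion(model, loader, a, b):
    """Fraction of held-out a/b examples predicted as the *other* class
    after unlearning; a low rate suggests the swapped-label information
    was actually erased."""
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x).argmax(dim=1)
            mask = (y == a) | (y == b)
            ys = y[mask]
            other = torch.where(ys == a, torch.full_like(ys, b), torch.full_like(ys, a))
            wrong += (pred[mask] == other).sum().item()
            total += int(mask.sum())
    return wrong / max(total, 1)
```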
arXiv Detail & Related papers (2022-01-17T21:49:21Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
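For intuition, the hedged sketch below applies that closed-form idea to L2-regularized logistic regression, where the Hessian is cheap to form explicitly; the cited paper's method is more general, so treat this purely as an illustration of the Newton-style removal update.

```python
# Influence-function unlearning for regularized logistic regression (sketch):
# removing points applies one Newton step, theta' = theta + H^{-1} g / n,
# where g sums the gradients of the removed points' losses.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_points(theta, X, y, X_rm, y_rm, lam=1e-2):
    """Approximately remove (X_rm, y_rm) from a model trained on (X, y)
    with loss (1/n) * sum_i CE_i + (lam/2) * ||theta||^2."""
    n, d = X.shape
    p = sigmoid(X @ theta)
    H = (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)  # Hessian at theta
    g = X_rm.T @ (sigmoid(X_rm @ theta) - y_rm)          # summed removed-point gradients
    return theta + np.linalg.solve(H, g) / n
```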
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
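To make the estimation idea concrete, here is a hedged sketch that uses MC dropout as a crude stand-in for the BNN and scores a model-under-test's expected accuracy on unlabeled data; ALT-MAS itself additionally selects which points to label actively, and all names below are hypothetical.

```python
# Sketch: estimate a classifier's accuracy without labels by trusting a
# (MC-dropout) Bayesian surrogate's predictive distribution over labels.
import torch

def estimate_accuracy(model_under_test, bnn, unlabeled_loader, samples=20):
    model_under_test.eval()
    bnn.train()  # keep dropout active so forward passes are stochastic
    exp_acc, n = 0.0, 0
    with torch.no_grad():
        for (x,) in unlabeled_loader:
            # MC-dropout predictive distribution over the unknown labels.
            probs = torch.stack([torch.softmax(bnn(x), dim=1)
                                 for _ in range(samples)]).mean(dim=0)
            pred = model_under_test(x).argmax(dim=1)
            # Probability mass the surrogate assigns to the model's predictions.
            exp_acc += probs.gather(1, pred.unsqueeze(1)).sum().item()
            n += x.size(0)
    return exp_acc / n
```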