Loss-Free Machine Unlearning
- URL: http://arxiv.org/abs/2402.19308v1
- Date: Thu, 29 Feb 2024 16:15:34 GMT
- Title: Loss-Free Machine Unlearning
- Authors: Jack Foster, Stefan Schoepf, Alexandra Brintrup
- Abstract summary: We present a machine unlearning approach that is both retraining- and label-free.
Retraining-free approaches often utilise Fisher information, which is derived from the loss and requires labelled data which may not be available.
We present an extension to the Selective Synaptic Dampening algorithm, replacing the diagonal of the Fisher information matrix with the gradient of the l2 norm of the model output to approximate sensitivity.
- Score: 51.34904967046097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a machine unlearning approach that is both retraining- and
label-free. Most existing machine unlearning approaches require a model to be
fine-tuned to remove information while preserving performance. This is
computationally expensive and necessitates the storage of the whole dataset for
the lifetime of the model. Retraining-free approaches often utilise Fisher
information, which is derived from the loss and requires labelled data which
may not be available. Thus, we present an extension to the Selective Synaptic
Dampening algorithm, replacing the diagonal of the Fisher information matrix
with the gradient of the l2 norm of the model output to approximate sensitivity.
We evaluate our method in a range of experiments using ResNet18 and Vision
Transformer. Results show our label-free method is competitive with existing
state-of-the-art approaches.
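The label-free sensitivity at the heart of the method can be sketched in a few lines. The toy below is our illustration, not the authors' code: it uses a plain NumPy linear model with finite differences standing in for autograd. Per-parameter importance is the squared gradient of the squared l2 norm of the model output (no labels or loss needed), computed once on the forget set and once on the full data; parameters whose forget-set importance dominates are then dampened SSD-style. The `forward`, `alpha`, and `lam` names and the exact dampening rule are assumptions for illustration.

```python
import numpy as np

def output_l2_sensitivity(forward, theta, X, eps=1e-5):
    """Label-free importance: for each parameter, average over samples the
    squared gradient of the squared l2 norm of the model output.
    Finite differences stand in for autograd in this toy sketch."""
    sens = np.zeros_like(theta)
    for x in X:
        xb = x[None, :]                              # single-sample batch
        base = np.sum(forward(theta, xb) ** 2)       # squared l2 norm of output
        for i in range(theta.size):
            t = theta.copy()
            t.flat[i] += eps
            g_i = (np.sum(forward(t, xb) ** 2) - base) / eps
            sens.flat[i] += g_i ** 2
    return sens / len(X)

def ssd_dampen(theta, sens_forget, sens_full, alpha=10.0, lam=1.0):
    """SSD-style selective dampening: shrink parameters whose forget-set
    sensitivity exceeds alpha times their full-set sensitivity."""
    mask = sens_forget > alpha * sens_full
    scale = np.minimum(lam * sens_full / (sens_forget + 1e-12), 1.0)
    new_theta = theta.copy()
    new_theta[mask] = theta[mask] * scale[mask]
    return new_theta
```

In the paper this importance score replaces the Fisher-information diagonal inside SSD, so unlearning needs only forward passes and output gradients, never labels or a stored loss.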
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows, however, that existing machine unlearning techniques do not hold up in more challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Towards Aligned Data Removal via Twin Machine Unlearning [30.070660418732807]
Modern privacy regulations have spurred the evolution of machine unlearning.
We present a Twin Machine Unlearning (TMU) approach, where a twin unlearning problem is defined corresponding to the original unlearning problem.
Our approach significantly enhances the alignment between the unlearned model and the gold model.
arXiv Detail & Related papers (2024-08-21T08:42:21Z) - Goldfish: An Efficient Federated Unlearning Framework [3.956103498302838]
Goldfish is a new framework for machine unlearning algorithms.
It comprises four modules: basic model, loss function, optimization, and extension.
To address the challenge of low validity in existing machine unlearning algorithms, we propose a novel loss function.
arXiv Detail & Related papers (2024-04-04T03:29:41Z) - Partially Blinded Unlearning: Class Unlearning for Deep Networks a Bayesian Perspective [4.31734012105466]
Machine Unlearning is the process of selectively discarding information designated to specific sets or classes of data from a pre-trained model.
We propose a methodology tailored for the purposeful elimination of information linked to a specific class of data from a pre-trained classification network.
Our novel approach, termed Partially-Blinded Unlearning (PBU), surpasses existing state-of-the-art class unlearning methods, demonstrating superior effectiveness.
arXiv Detail & Related papers (2024-03-24T17:33:22Z) - An Information Theoretic Approach to Machine Unlearning [45.600917449314444]
A key challenge in unlearning is forgetting the necessary data in a timely manner while preserving model performance.
In this work, we address the zero-shot unlearning scenario, whereby an unlearning algorithm must be able to remove data given only a trained model and the data to be forgotten.
We derive a simple but principled zero-shot unlearning method based on the geometry of the model.
arXiv Detail & Related papers (2024-02-02T13:33:30Z) - Dataset Condensation Driven Machine Unlearning [0.0]
The current trend in data regulation requirements and privacy-preserving machine learning has emphasized the importance of machine unlearning.
We propose new dataset condensation techniques and an innovative unlearning scheme that strikes a balance between machine unlearning privacy, utility, and efficiency.
We present a novel and effective approach to instrumenting machine unlearning and propose its application in defending against membership inference and model inversion attacks.
arXiv Detail & Related papers (2024-01-31T21:48:25Z) - Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence to demonstrate that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - Fast Machine Unlearning Without Retraining Through Selective Synaptic
Dampening [51.34904967046097]
Selective Synaptic Dampening (SSD) is a novel two-step, post hoc, retrain-free approach to machine unlearning that is fast, performant, and does not require long-term storage of the training data.
arXiv Detail & Related papers (2023-08-15T11:30:45Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose a method for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
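As a concrete illustration of the closed-form, influence-function-style update mentioned above: for a quadratic objective such as ridge regression, the Newton step that removes one training point's contribution is exact, so "unlearning" reduces to a single linear solve. The toy below is our construction, not the paper's code; `fit_ridge`, `unlearn_point`, and `lam` are illustrative names.

```python
import numpy as np

def fit_ridge(X, y, lam=0.1):
    """Closed-form ridge regression: theta = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def unlearn_point(theta, X, y, idx, lam=0.1):
    """Remove one training point via an influence-style closed-form Newton
    update. For a quadratic objective the step is exact when the Hessian is
    downdated by the removed point's contribution."""
    d = X.shape[1]
    x, t = X[idx], y[idx]
    H_minus = X.T @ X + lam * np.eye(d) - np.outer(x, x)  # Hessian without point
    grad_z = x * (x @ theta - t)                          # gradient of removed loss term
    return theta + np.linalg.solve(H_minus, grad_z)
```

The update matches retraining from scratch here only because the objective is quadratic; for deep networks, influence-function updates of this kind are approximations.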
This list is automatically generated from the titles and abstracts of the papers in this site.