One-Shot Machine Unlearning with Mnemonic Code
- URL: http://arxiv.org/abs/2306.05670v1
- Date: Fri, 9 Jun 2023 04:59:24 GMT
- Title: One-Shot Machine Unlearning with Mnemonic Code
- Authors: Tomoya Yamashita and Masanori Yamada and Takashi Shibata
- Abstract summary: Machine unlearning (MU) aims to make a trained deep learning model forget undesirable training data. A naive MU approach is to re-train the whole model on the training data from which the undesirable data have been removed. We propose a one-shot MU method that does not need additional training.
- Score: 5.579745503613096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved significant improvements in accuracy and has been applied to a wide range of fields. With the spread of deep learning, a new problem has also emerged: deep learning models can retain information that is undesirable from an ethical standpoint. This problem must be resolved if deep learning is to be used for sensitive decisions such as hiring and prison sentencing. Machine unlearning (MU) is the research area that responds to such demands. MU aims to make a trained deep learning model forget undesirable training data. A naive MU approach is to re-train the whole model on the training data from which the undesirable data have been removed. However, re-training the whole model can take a huge amount of time and consume significant computational resources. To make MU more practical, a simple yet effective MU method is required. In this paper, we propose a one-shot MU method that does not need additional training. To realize one-shot MU, we add noise to the model parameters that are sensitive to the undesirable information. Our method uses the Fisher information matrix (FIM) to estimate these sensitive parameters. Existing methods typically require the training data to evaluate the FIM; in contrast, we avoid retaining the training data by computing the FIM from class-specific synthetic signals called mnemonic codes. Extensive experiments on artificial and natural datasets demonstrate that our method outperforms existing methods.
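To make the mechanism concrete, here is a minimal PyTorch-style sketch of the idea described in the abstract: estimate a diagonal FIM from mnemonic codes (class-specific synthetic inputs) and perturb only the parameters most sensitive to the class being forgotten. The selection rule (top fraction by Fisher value) and the noise scale are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def fim_diagonal(model, codes, labels):
    """Diagonal Fisher information estimated from mnemonic codes
    (class-specific synthetic inputs) instead of the training data."""
    fim = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in zip(codes, labels):          # x: input tensor, y: scalar long label
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fim[n] += p.grad.detach() ** 2
    return {n: f / max(len(codes), 1) for n, f in fim.items()}

@torch.no_grad()
def one_shot_forget(model, fim_forget, noise_scale=0.1, top_frac=0.01):
    """Add noise to the parameters most sensitive to the forgetting target.
    fim_forget is the FIM computed only from the mnemonic code(s) of the
    class to forget; top_frac and noise_scale are illustrative choices."""
    for n, p in model.named_parameters():
        f = fim_forget[n].flatten()
        k = max(1, int(top_frac * f.numel()))
        thresh = torch.topk(f, k).values.min()
        mask = (fim_forget[n] >= thresh).to(p.dtype)
        p.add_(mask * noise_scale * torch.randn_like(p))
```

Because only a fraction of parameters is disturbed and no gradient steps are taken, the whole procedure runs in a single pass over the mnemonic codes, which is what makes it "one-shot".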
Related papers
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimating the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning), and introduces the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
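For orientation, influence-based approaches of this kind usually start from the classical first-order approximation of how the empirical-risk minimizer shifts when one training point z is removed; the formula below is that standard background result, not necessarily the exact estimator used in this paper.

```latex
\hat{\theta}_{-z} \;\approx\; \hat{\theta} + \frac{1}{n}\, H_{\hat{\theta}}^{-1} \nabla_{\theta}\,\ell(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2}\,\ell(z_i, \hat{\theta}).
```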
arXiv Detail & Related papers (2025-07-31T05:34:27Z) - Sharpness-Aware Parameter Selection for Machine Unlearning [6.397490580631141]
Sensitive personal information, such as credit card numbers or passwords, is sometimes mistakenly incorporated into the training of machine learning models and needs to be removed afterwards.
There have been various machine unlearning techniques proposed in the literature to address this problem.
Most of the proposed methods revolve around removing individual data samples from a trained model.
While existing methods perform unlearning by updating either the whole set of model parameters or only the last layer of the model, we show that a subset of the model parameters makes the largest contribution to the unlearning target features.
arXiv Detail & Related papers (2025-04-08T19:41:07Z) - Learning with Less: Knowledge Distillation from Large Language Models via Unlabeled Data [54.934578742209716]
In real-world NLP applications, Large Language Models (LLMs) offer promising solutions due to their extensive training on vast datasets.
LLKD is an adaptive sample selection method that incorporates signals from both the teacher and student.
Our comprehensive experiments show that LLKD achieves superior performance across various datasets with higher data efficiency.
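For context, distillation of this kind typically trains the student to match the teacher's softened output distribution; a minimal version of that objective is sketched below. LLKD's adaptive sample-selection signals from teacher and student are not modeled here, so this is background rather than the paper's method.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions, then match them with KL divergence;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```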
arXiv Detail & Related papers (2024-11-12T18:57:59Z) - Mitigating Memorization In Language Models [37.899013074095336]
Language models (LMs) can "memorize" information, i.e., encode training data in their weights in such a way that inference-time queries can lead to verbatim regurgitation of that data.
We introduce TinyMem, a suite of small, computationally-efficient LMs for the rapid development and evaluation of memorization-mitigation methods.
We show, in particular, that our proposed unlearning method BalancedSubnet outperforms other mitigation methods at removing memorized information while preserving performance on target tasks.
arXiv Detail & Related papers (2024-10-03T02:53:51Z) - Deep Unlearn: Benchmarking Machine Unlearning [7.450700594277741]
Machine unlearning (MU) aims to remove the influence of particular data points from the learnable parameters of a trained machine learning model.
This paper investigates 18 state-of-the-art MU methods across various benchmark datasets and models.
arXiv Detail & Related papers (2024-10-02T06:41:58Z) - Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs [25.91643745340183]
Large Language Models (LLMs) have demonstrated strong reasoning and memorization capabilities via pretraining on massive textual corpora.
This poses risks of privacy and copyright violations, highlighting the need for efficient machine unlearning methods.
We propose Low-rank Knowledge Unlearning (LoKU), a novel framework that enables robust and efficient unlearning for LLMs.
arXiv Detail & Related papers (2024-08-13T04:18:32Z) - Robust Machine Learning by Transforming and Augmenting Imperfect
Training Data [6.928276018602774]
This thesis explores several data sensitivities of modern machine learning.
We first discuss how to prevent ML from codifying prior human discrimination measured in the training data.
We then discuss the problem of learning from data containing spurious features, which provide predictive fidelity during training but are unreliable upon deployment.
arXiv Detail & Related papers (2023-12-19T20:49:28Z) - Fast Machine Unlearning Without Retraining Through Selective Synaptic
Dampening [51.34904967046097]
Selective Synaptic Dampening (SSD) is fast and performant, and does not require long-term storage of the training data.
We present a novel two-step, post hoc, retrain-free approach to machine unlearning which is fast, performant, and does not require long-term storage of the training data.
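As a rough sketch of the dampening idea, the snippet below shrinks parameters whose importance for the forget data far exceeds their importance for the data as a whole; the threshold alpha, the scaling lam, and the use of a diagonal Fisher estimate are illustrative assumptions rather than SSD's exact rules.

```python
import torch

@torch.no_grad()
def selective_dampening(model, fim_forget, fim_full, alpha=10.0, lam=1.0):
    """fim_forget / fim_full: dicts of per-parameter diagonal Fisher estimates
    computed on the forget data and on the full data, respectively."""
    for n, p in model.named_parameters():
        f_forget, f_full = fim_forget[n], fim_full[n]
        # Parameters disproportionately important for the forget data.
        mask = f_forget > alpha * f_full
        # Multiplicative shrink factor, capped at 1 so weights are never amplified.
        beta = torch.clamp(lam * f_full / (f_forget + 1e-12), max=1.0)
        p[mask] = p[mask] * beta[mask]
```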
arXiv Detail & Related papers (2023-08-15T11:30:45Z) - Data efficient surrogate modeling for engineering design: Ensemble-free
batch mode deep active learning for regression [0.6021787236982659]
We propose a simple and scalable approach for active learning that works in a student-teacher manner to train a surrogate model.
Using this approach, we achieve the same level of surrogate accuracy as baselines such as DBAL and Monte Carlo sampling.
arXiv Detail & Related papers (2022-11-16T02:31:57Z) - Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
arXiv Detail & Related papers (2022-09-29T18:11:01Z) - Incremental Online Learning Algorithms Comparison for Gesture and Visual
Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z) - Light-weight Deformable Registration using Adversarial Learning with
Distilling Knowledge [17.475408305030278]
We introduce a new Light-weight Deformable Registration network that significantly reduces the computational cost while achieving competitive accuracy.
In particular, we propose a new adversarial learning with knowledge distillation algorithm that successfully transfers meaningful information from the effective but expensive teacher network to the student network.
Extensive experimental results on different public datasets show that our proposed method achieves state-of-the-art accuracy while being significantly faster than recent methods.
arXiv Detail & Related papers (2021-10-04T09:59:01Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
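As background, the standard (scaled-form) global consensus ADMM iteration that mini-batch and coded variants build on is shown below; the stochastic gradients and coding scheme studied in the paper are not reproduced here.

```latex
x_i^{k+1} = \arg\min_{x}\; f_i(x) + \frac{\rho}{2}\bigl\lVert x - z^{k} + u_i^{k}\bigr\rVert_2^2,
\qquad
z^{k+1} = \frac{1}{N}\sum_{i=1}^{N}\bigl(x_i^{k+1} + u_i^{k}\bigr),
\qquad
u_i^{k+1} = u_i^{k} + x_i^{k+1} - z^{k+1}.
```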
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - One-step regression and classification with crosspoint resistive memory
arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of the prediction of the cost of a house in Boston and the training of a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)