Boundary Unlearning
- URL: http://arxiv.org/abs/2303.11570v1
- Date: Tue, 21 Mar 2023 03:33:18 GMT
- Title: Boundary Unlearning
- Authors: Min Chen, Weizhuo Gao, Gaoyang Liu, Kai Peng, Chen Wang
- Abstract summary: We propose Boundary Unlearning, a rapid yet effective way to unlearn an entire class from a trained machine learning model.
We extensively evaluate Boundary Unlearning on image classification and face recognition tasks, with an expected speed-up of $17\times$ and $19\times$, respectively.
- Score: 5.132489421775161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The practical needs of the ``right to be forgotten'' and poisoned data
removal call for efficient \textit{machine unlearning} techniques, which enable
machine learning models to unlearn, or to forget a fraction of training data
and its lineage. Recent studies on machine unlearning for deep neural networks
(DNNs) attempt to destroy the influence of the forgetting data by scrubbing the
model parameters. However, it is prohibitively expensive due to the large
dimension of the parameter space. In this paper, we refocus our attention from
the parameter space to the decision space of the DNN model, and propose
Boundary Unlearning, a rapid yet effective way to unlearn an entire class from
a trained DNN model. The key idea is to shift the decision boundary of the
original DNN model to imitate the decision behavior of the model retrained from
scratch. We develop two novel boundary shift methods, namely Boundary Shrink
and Boundary Expanding, both of which can rapidly achieve the utility and
privacy guarantees. We extensively evaluate Boundary Unlearning on CIFAR-10 and
Vggface2 datasets, and the results show that Boundary Unlearning can
effectively forget the forgetting class on image classification and face
recognition tasks, with an expected speed-up of $17\times$ and $19\times$,
respectively, compared with retraining from scratch.
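The abstract describes shifting the decision boundary so that the forgetting class ends up on the wrong side of it. Below is a minimal PyTorch sketch of the Boundary Shrink idea, assuming the common formulation of relabeling each forgetting sample to its nearest incorrect class (found with a single FGSM-style step) and briefly fine-tuning; the function name, hyperparameters, and loop structure are illustrative assumptions, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def boundary_shrink(model, forget_loader, epsilon=0.1, lr=1e-4, epochs=5, device="cpu"):
    """Push forgetting-class samples across the decision boundary by relabeling
    each one to its nearest incorrect class and fine-tuning briefly."""
    model.to(device)
    model.train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in forget_loader:
            x, y = x.to(device), y.to(device)
            # One FGSM-style step to find a nearby point across the boundary.
            x_adv = x.clone().detach().requires_grad_(True)
            F.cross_entropy(model(x_adv), y).backward()
            x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()
            with torch.no_grad():
                logits = model(x_adv)
                logits[torch.arange(len(y)), y] = float("-inf")  # never pick the true class
                nearest_wrong = logits.argmax(dim=1)
            # Fine-tune on the original inputs with the reassigned labels.
            optimizer.zero_grad()
            F.cross_entropy(model(x), nearest_wrong).backward()
            optimizer.step()
    return model
```

Boundary Expanding, the second variant named in the abstract, reportedly works by temporarily widening the decision space rather than relabeling; it is not sketched here.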
Related papers
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning
Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the ``forget'' data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence demonstrating that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
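A hedged sketch of the gradient-projection idea behind PGU: the update computed from the forget data is projected onto the orthogonal complement of a low-rank subspace spanned by retained-data gradients, so it disturbs retained knowledge as little as possible. The SVD-based subspace construction and the rank `k` below are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def project_out_retained(grad_vec, retained_grads, k=16):
    """Remove from `grad_vec` its component inside the span of retained-data gradients."""
    G = torch.stack(retained_grads)                       # (n, d): one flattened gradient per row
    U, _, _ = torch.linalg.svd(G.T, full_matrices=False)  # columns of U span the retained subspace
    basis = U[:, :k]                                      # keep the top-k directions
    return grad_vec - basis @ (basis.T @ grad_vec)        # orthogonal-complement projection
```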
arXiv Detail & Related papers (2023-12-07T07:17:24Z) - CovarNav: Machine Unlearning via Model Inversion and Covariance
Navigation [11.222501077070765]
Machine unlearning has emerged as an essential technique to selectively remove the influence of specific training data points on trained models.
We introduce a three-step process, named CovarNav, to facilitate this forgetting.
We rigorously evaluate CovarNav on the CIFAR-10 and Vggface2 datasets.
arXiv Detail & Related papers (2023-11-21T21:19:59Z) - Update Compression for Deep Neural Networks on the Edge [33.57905298104467]
An increasing number of AI applications involve the execution of deep neural networks (DNNs) on edge devices.
Many practical reasons motivate the need to update the DNN model on the edge device post-deployment.
We develop a simple approach based on matrix factorisation to compress the model update.
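A brief sketch of what compressing a model update by matrix factorisation can look like: the dense weight delta is replaced by a truncated SVD, and only the two thin factors are shipped to the edge device. The rank `r` and the NumPy-based reconstruction are illustrative assumptions, not the paper's method.

```python
import numpy as np

def compress_update(w_old, w_new, r=8):
    """Factorise the dense update (w_new - w_old) into two thin rank-r matrices."""
    delta = w_new - w_old
    U, S, Vt = np.linalg.svd(delta, full_matrices=False)
    A = U[:, :r] * S[:r]            # (m, r)
    B = Vt[:r, :]                   # (r, n)
    return A, B                     # transmit A and B instead of the full delta

def apply_update(w_old, A, B):
    """Reconstruct the updated weights on the edge device."""
    return w_old + A @ B
```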
arXiv Detail & Related papers (2022-03-09T04:20:43Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
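A hedged illustration of a closed-form, influence-function style update: the parameter change needed to account for perturbing (or removing) some training points is approximated by a single Newton step with the Hessian of the training loss. This is a generic sketch of the idea, not the authors' exact derivation.

```python
import numpy as np

def influence_unlearning_step(theta, hessian, grad_original, grad_perturbed):
    """theta <- theta - H^{-1} (grad_perturbed - grad_original): one Newton step that
    approximates the model that would have been trained on the perturbed data."""
    return theta - np.linalg.solve(hessian, grad_perturbed - grad_original)
```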
arXiv Detail & Related papers (2021-08-26T04:42:24Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep
Neural Networks [7.687838702806964]
We propose an approach, dubbed as DeepObliviate, to implement machine unlearning efficiently.
Our approach improves the original training process by storing intermediate models on the hard disk.
Compared to the method of retraining from scratch, our approach can achieve 99.0%, 95.0%, 91.9%, 96.7%, 74.1% accuracy rates and $66.7\times$, $75.0\times$, $33.3\times$, $29.4\times$, $13.7\times$ speedups.
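A sketch of checkpoint-based unlearning in the spirit of the DeepObliviate summary: intermediate models are stored during training, and unlearning rewinds to the checkpoint saved just before the data block containing the deleted samples, then retrains only the affected tail. The block structure and function names below are illustrative assumptions.

```python
def unlearn_with_checkpoints(checkpoints, blocks, deleted_block_idx, train_fn, is_deleted):
    """checkpoints[i] holds the model state saved *before* training on blocks[i];
    train_fn(model, samples) continues training and returns the updated model."""
    model = checkpoints[deleted_block_idx]                   # rewind to the last clean state
    for i in range(deleted_block_idx, len(blocks)):
        kept = [s for s in blocks[i] if not is_deleted(s)]   # drop the samples to forget
        model = train_fn(model, kept)                        # retrain only the affected tail
    return model
```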
arXiv Detail & Related papers (2021-05-13T12:02:04Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Modeling Token-level Uncertainty to Learn Unknown Concepts in SLU via
Calibrated Dirichlet Prior RNN [98.4713940310056]
One major task of spoken language understanding (SLU) in modern personal assistants is to extract semantic concepts from an utterance.
Recent research has collected question-and-answer annotated data to learn what is unknown and should be asked.
We incorporate a calibrated Dirichlet prior RNN into softmax-based slot filling neural architectures to model sequence-level uncertainty without question supervision.
arXiv Detail & Related papers (2020-10-16T02:12:30Z) - Training Deep Neural Networks with Constrained Learning Parameters [4.917317902787792]
A significant portion of deep learning tasks would run on edge computing systems.
We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA)
CoNNTrA trains deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets.
Our results indicate that CoNNTrA models use 32$\times$ less memory and have error rates on par with models trained via backpropagation.
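To make the memory saving concrete: ternary parameters take values in {-1, 0, +1}, so each weight can be packed into a couple of bits rather than a 32-bit float. The thresholding rule below is a common heuristic for ternarisation, not necessarily CoNNTrA's training procedure.

```python
import numpy as np

def ternarize(weights, threshold=0.05):
    """Map full-precision weights to {-1, 0, +1} by magnitude thresholding."""
    return np.sign(weights) * (np.abs(weights) > threshold)
```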
arXiv Detail & Related papers (2020-09-01T16:20:11Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs)
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)