Machine Unlearning using Forgetting Neural Networks
- URL: http://arxiv.org/abs/2410.22374v1
- Date: Tue, 29 Oct 2024 02:52:26 GMT
- Title: Machine Unlearning using Forgetting Neural Networks
- Authors: Amartya Hatua, Trung T. Nguyen, Filip Cano, Andrew H. Sung
- Abstract summary: This paper presents a new approach to machine unlearning using forgetting neural networks (FNNs).
FNNs are neural networks with specific forgetting layers that take inspiration from the processes involved when a human brain forgets.
We report results on the MNIST handwritten digit and Fashion-MNIST datasets.
- Abstract: Modern computer systems store vast amounts of personal data, enabling advances in AI and ML but putting user privacy and trust at risk. For privacy reasons, it is sometimes desirable for an ML model to forget part of the data it was trained on. This paper presents a new approach to machine unlearning using forgetting neural networks (FNNs). FNNs are neural networks with specific forgetting layers that take inspiration from the processes involved when a human brain forgets. While FNNs have been proposed as a theoretical construct, they have not previously been used as a machine unlearning method. We describe four different types of forgetting layers and study their properties. In our experimental evaluation, we report results on the MNIST handwritten digit and Fashion-MNIST datasets. The effectiveness of the unlearned models was tested using membership inference attacks (MIAs). Successful experimental results demonstrate the great potential of our proposed method for the machine unlearning problem.
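The abstract names four types of forgetting layers but does not define them, so the sketch below illustrates only the general idea under stated assumptions: a layer that attenuates activations by an Ebbinghaus-style exponential retention factor as unlearning steps accumulate. The class name `ForgettingLayer`, the decay schedule, and its placement are hypothetical, not the paper's construction; in the paper, forgetting would presumably be targeted at the data to be removed rather than applied globally.

```python
# Hypothetical forgetting-layer sketch: scales activations by an
# Ebbinghaus-style retention factor e^(-k*t) as "forgetting time" t grows.
import math

import torch
from torch import nn


class ForgettingLayer(nn.Module):
    def __init__(self, decay_rate: float = 0.5):
        super().__init__()
        self.decay_rate = decay_rate
        self.t = 0  # number of forgetting steps applied so far

    def step(self) -> None:
        """Advance the forgetting clock; call once per unlearning step."""
        self.t += 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # retention == 1.0 until step() is first called, then decays toward 0.
        retention = math.exp(-self.decay_rate * self.t)
        return x * retention


# Usage: insert between ordinary layers of a small MNIST classifier.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    ForgettingLayer(decay_rate=0.5),
    nn.Linear(128, 10),
)
```

Following the abstract's evaluation protocol, one would then compare membership inference attack accuracy on the forget set before and after applying the forgetting steps.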
Related papers
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Data Isotopes for Data Provenance in DNNs [27.549744883427376]
We show how users can create special data points we call isotopes, which introduce "spurious features" into DNNs during training.
A user can apply statistical hypothesis testing to detect whether a model, through training on the user's data, has learned the spurious features associated with their isotopes (a minimal sketch of this test follows this entry).
Our results confirm efficacy in multiple settings, detecting and distinguishing between hundreds of isotopes with high accuracy.
arXiv Detail & Related papers (2022-08-29T21:28:35Z)
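A minimal sketch of the isotope idea summarized above: stamp a fixed "spurious feature" (here a hypothetical 3x3 bright corner patch) onto the user's images, then run a one-sided t-test on the model's target-class logits for marked versus clean inputs. The marker, the choice of test, and `model_logit_fn` are illustrative assumptions, not the paper's exact protocol.

```python
# Isotope sketch: a user-controlled spurious feature plus a hypothesis test
# that checks whether a trained model responds to it.
import numpy as np
from scipy import stats


def add_isotope(images: np.ndarray, value: float = 1.0) -> np.ndarray:
    """Stamp a 3x3 bright patch (the hypothetical 'isotope') in the corner."""
    marked = images.copy()
    marked[:, :3, :3] = value
    return marked


def detect_isotope(model_logit_fn, clean: np.ndarray, target_class: int,
                   alpha: float = 0.01) -> bool:
    """One-sided t-test: do marked inputs score higher on the target class?"""
    marked = add_isotope(clean)
    clean_scores = model_logit_fn(clean)[:, target_class]
    marked_scores = model_logit_fn(marked)[:, target_class]
    _, p_value = stats.ttest_ind(marked_scores, clean_scores,
                                 alternative="greater")
    return p_value < alpha  # True => model likely trained on isotope data
```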
- Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show that, in some cases, a significant fraction of the training data can be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
arXiv Detail & Related papers (2022-06-15T18:35:16Z)
- The ideal data compression and automatic discovery of hidden law using neural network [0.0]
We consider how the human brain recognizes events and memorizes them.
We reproduce this system of the human brain in a machine learning model with a new autoencoder neural network.
arXiv Detail & Related papers (2022-03-31T10:55:24Z)
- Lightweight machine unlearning in neural network [2.406359246841227]
"Right to be forgotten" was introduced in a timely manner, stipulating that individuals have the right to withdraw their consent based on their consent.
To solve this problem, machine unlearning is proposed, which allows the model to erase all memory of private information.
Our method is 15 times faster than retraining.
arXiv Detail & Related papers (2021-11-10T04:48:31Z)
- What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in pixel space to represent the knowledge the model has learned for each class (a sketch of such a search follows this entry).
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
arXiv Detail & Related papers (2021-01-18T06:38:41Z)
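A hedged sketch of the pattern search described above: gradient ascent on a blank input to maximize a chosen class logit, keeping pixels in [0, 1]. The optimizer, step count, and clamping are assumptions; the paper's exact objective and constraints may differ.

```python
# Search for a class-wise predictive pattern in pixel space by maximizing
# one class logit with respect to the input itself.
import torch


def find_class_pattern(model: torch.nn.Module, target_class: int,
                       shape=(1, 3, 32, 32), steps: int = 200,
                       lr: float = 0.1) -> torch.Tensor:
    pattern = torch.zeros(shape, requires_grad=True)
    optimizer = torch.optim.Adam([pattern], lr=lr)
    model.eval()
    for _ in range(steps):
        optimizer.zero_grad()
        logit = model(pattern)[0, target_class]
        (-logit).backward()           # ascend the target-class logit
        optimizer.step()
        with torch.no_grad():
            pattern.clamp_(0.0, 1.0)  # keep the pattern a valid image
    return pattern.detach()
```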
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations into the network design.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
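A minimal NAM sketch consistent with the description above: one small MLP per input feature, whose scalar contributions are summed (plus a bias) to form the prediction. Hidden sizes are illustrative; interpretability comes from inspecting each feature network's contribution in isolation.

```python
# Neural Additive Model sketch: prediction = bias + sum_i f_i(x_i),
# where each f_i is a tiny per-feature MLP.
import torch
from torch import nn


class NeuralAdditiveModel(nn.Module):
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        # One feature network per input dimension, each mapping R -> R.
        self.feature_nets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        ])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); each column goes to its own network.
        contributions = [net(x[:, i:i + 1])
                         for i, net in enumerate(self.feature_nets)]
        return torch.stack(contributions, dim=0).sum(dim=0) + self.bias
```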
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy (a toy demonstration follows this entry).
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
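The exact ADA formula is not given in this summary, so the toy demonstration below substitutes a generic non-monotonic "bump" activation, exp(-z^2), to show how a single neuron can separate XOR: with weights (1, 1) and bias -1, the pre-activations for the four XOR inputs are -1, 0, 0, 1, and a bump peaked at zero maps the two positive cases to 1.0 and the two negative cases to about 0.37.

```python
# One neuron with a non-monotonic activation solving XOR exactly.
# The bump function is a stand-in, not the paper's ADA formula.
import numpy as np


def bump(z: np.ndarray) -> np.ndarray:
    return np.exp(-z ** 2)  # peaks at z = 0, decays on both sides


X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

w, b = np.array([1.0, 1.0]), -1.0
pre = X @ w + b              # pre-activations: -1, 0, 0, 1
out = bump(pre)              # ~0.37, 1.0, 1.0, ~0.37
pred = (out > 0.5).astype(int)
assert (pred == y).all()     # XOR solved by a single non-monotonic neuron
```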
- Biologically-Motivated Deep Learning Method using Hierarchical Competitive Learning [0.0]
I propose unsupervised competitive learning, which only requires forward-propagating signals, as a pre-training method for CNNs (sketched after this entry).
The proposed method could be useful for a variety of poorly labeled data, for example, time series or medical data.
arXiv Detail & Related papers (2020-01-04T20:07:36Z)
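A sketch of forward-only unsupervised competitive learning in the spirit of the entry above: winner-take-all prototype units whose weights move toward the inputs they win, with no backpropagation. The unit count, learning rate, and the idea of seeding CNN filters from the learned prototypes are illustrative assumptions.

```python
# Winner-take-all competitive learning: only forward passes are needed.
import numpy as np


def competitive_learning(X: np.ndarray, num_units: int = 10,
                         lr: float = 0.05, epochs: int = 5,
                         seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(num_units, X.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(X):
            # Forward pass only: find the closest prototype...
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            # ...and move it toward the input (Hebbian-style update).
            W[winner] += lr * (x - W[winner])
    return W  # rows act as learned prototypes / pre-trained filters


# Usage idea: learn prototypes from flattened image patches, then
# initialize CNN filters from them before supervised fine-tuning.
```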
This list is automatically generated from the titles and abstracts of the papers on this site.