When Machine Unlearning Jeopardizes Privacy
- URL: http://arxiv.org/abs/2005.02205v2
- Date: Tue, 14 Sep 2021 17:59:41 GMT
- Title: When Machine Unlearning Jeopardizes Privacy
- Authors: Min Chen and Zhikun Zhang and Tianhao Wang and Michael Backes and
Mathias Humbert and Yang Zhang
- Abstract summary: We investigate the unintended information leakage caused by machine unlearning.
We propose a novel membership inference attack that achieves strong performance.
Our results can help improve privacy protection in practical implementations of machine unlearning.
- Score: 25.167214892258567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The right to be forgotten states that a data owner has the right to erase
their data from an entity storing it. In the context of machine learning (ML),
the right to be forgotten requires an ML model owner to remove the data owner's
data from the training set used to build the ML model, a process known as
machine unlearning. While originally designed to protect the privacy of the
data owner, we argue that machine unlearning may leave some imprint of the data
in the ML model and thus create unintended privacy risks. In this paper, we
perform the first study investigating the unintended information leakage
caused by machine unlearning. We propose a novel membership inference attack
that leverages the different outputs of an ML model's two versions to infer
whether a target sample is part of the training set of the original model but
out of the training set of the corresponding unlearned model. Our experiments
demonstrate that the proposed membership inference attack achieves strong
performance. More importantly, we show that, in multiple cases, our attack
outperforms the classical membership inference attack on the original ML model,
which indicates that machine unlearning can have counterproductive effects on
privacy. We notice that the privacy degradation is especially significant for
well-generalized ML models where classical membership inference does not
perform well. We further investigate four mechanisms to mitigate the newly
discovered privacy risks and show that releasing the predicted label only,
temperature scaling, and differential privacy are effective. We believe that
our results can help improve privacy protection in practical implementations of
machine unlearning. Our code is available at
https://github.com/MinChen00/UnlearningLeaks.
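Below is a minimal sketch of the two-model attack idea described in the abstract, assuming black-box access to the posteriors of both the original and the unlearned model. The feature construction, attack classifier, and function names are illustrative simplifications, not the authors' exact pipeline (see the linked repository for that).

```python
# Hedged sketch of the two-model membership inference idea: query the target
# sample on both the original and the unlearned model, aggregate the two
# posteriors into a feature vector, and feed it to an attack classifier
# trained on shadow model pairs. All names here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def attack_features(posterior_original, posterior_unlearned):
    """Aggregate the two posterior vectors of a target sample into features."""
    p_o = np.asarray(posterior_original, dtype=float)
    p_u = np.asarray(posterior_unlearned, dtype=float)
    return np.concatenate([p_o, p_u, p_o - p_u])  # concatenation + difference

def train_attack_model(shadow_posterior_pairs, shadow_labels):
    """shadow_posterior_pairs: list of (posterior_original, posterior_unlearned)
    collected from shadow models; shadow_labels: 1 if the sample was deleted
    (in the original training set but not the unlearned one), else 0."""
    X = np.stack([attack_features(p_o, p_u) for p_o, p_u in shadow_posterior_pairs])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, shadow_labels)
    return clf

def infer_membership(clf, posterior_original, posterior_unlearned):
    """Return the attack's estimated probability that the target was unlearned."""
    x = attack_features(posterior_original, posterior_unlearned).reshape(1, -1)
    return clf.predict_proba(x)[0, 1]
```

In this sketch the shadow phase mirrors the target setting: the adversary trains shadow original/unlearned model pairs on data it controls, so the membership labels needed to fit the attack classifier are known by construction.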
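The abstract reports that releasing only the predicted label, temperature scaling, and differential privacy are effective mitigations. The snippet below is a minimal sketch of the temperature-scaling and label-only variants; the temperature value T = 4.0 is an arbitrary choice for illustration.

```python
# Hedged sketch of two reported mitigations: temperature scaling (divide the
# logits by T > 1 before the softmax, which flattens the posterior and removes
# much of the sample-specific signal the two-model attack relies on) and
# label-only release (publish only the predicted class index).
import numpy as np

def temperature_scaled_posterior(logits, T=4.0):
    """Return softmax(logits / T); larger T yields a smoother posterior."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                 # subtract the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

def label_only_output(logits):
    """Release only the predicted class, hiding the posterior entirely."""
    return int(np.argmax(logits))
```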
Related papers
- Game-Theoretic Machine Unlearning: Mitigating Extra Privacy Leakage [12.737028324709609]
Recent legislation obligates organizations to remove requested data and its influence from a trained model.
We propose a game-theoretic machine unlearning algorithm that simulates the competitive relationship between unlearning performance and privacy protection.
arXiv Detail & Related papers (2024-11-06T13:47:04Z)
- Privacy Side Channels in Machine Learning Systems [87.53240071195168]
We introduce privacy side channels: attacks that exploit system-level components to extract private information.
For example, we show that deduplicating training data before applying differentially-private training creates a side-channel that completely invalidates any provable privacy guarantees.
We further show that systems which block language models from regenerating training data can be exploited to exfiltrate private keys contained in the training set.
arXiv Detail & Related papers (2023-09-11T16:49:05Z)
- AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- A Survey on Differential Privacy with Machine Learning and Future Outlook [0.0]
Differential privacy is used to protect machine learning models against attacks and vulnerabilities.
This survey paper presents different differentially private machine learning algorithms categorized into two main categories.
arXiv Detail & Related papers (2022-11-19T14:20:53Z)
- A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- Zero-Shot Machine Unlearning [6.884272840652062]
Modern privacy regulations grant citizens the right to be forgotten by products, services and companies.
In this zero-shot setting, no data related to the training process or the training samples is available for unlearning.
We propose two novel solutions for zero-shot machine unlearning based on (a) error minimizing-maximizing noise and (b) gated knowledge transfer.
arXiv Detail & Related papers (2022-01-14T19:16:09Z)
- Machine unlearning via GAN [2.406359246841227]
Machine learning models, especially deep models, may unintentionally remember information about their training data.
We present a GAN-based algorithm to delete data in deep models, which significantly improves deleting speed compared to retraining from scratch.
arXiv Detail & Related papers (2021-11-22T05:28:57Z)
- Amnesiac Machine Learning [15.680008735220785]
The recently enacted General Data Protection Regulation affects any data holder that has data on European Union residents.
Models are vulnerable to information leakage attacks such as model inversion.
We present two data removal methods, namely Unlearning and Amnesiac Unlearning, that enable model owners to protect themselves against such attacks while being compliant with regulations.
arXiv Detail & Related papers (2020-10-21T13:14:17Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
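A heavily hedged sketch of the label-only sampling attack summarized in the last entry above: the adversary repeatedly queries the victim with perturbed copies of the target sample and uses the empirical label frequencies as a surrogate confidence vector for a downstream score-based membership inference attack. The Gaussian perturbation, noise scale, query budget, and function names are assumptions for illustration, not that paper's exact procedure.

```python
# Hedged sketch of the label-only "sampling attack" idea: estimate a
# pseudo-posterior from repeated label-only queries on perturbed inputs,
# then hand it to any score-based membership inference attack.
import numpy as np

def surrogate_confidence(query_label_fn, x, num_classes,
                         n_queries=200, sigma=0.05, seed=0):
    """query_label_fn: hypothetical black-box oracle returning only a class index."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_classes)
    x = np.asarray(x, dtype=float)
    for _ in range(n_queries):
        x_perturbed = x + rng.normal(scale=sigma, size=x.shape)  # Gaussian noise
        counts[query_label_fn(x_perturbed)] += 1
    return counts / n_queries  # empirical label distribution as a confidence proxy
```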