Fair Machine Unlearning: Data Removal while Mitigating Disparities
- URL: http://arxiv.org/abs/2307.14754v2
- Date: Fri, 16 Feb 2024 01:42:48 GMT
- Title: Fair Machine Unlearning: Data Removal while Mitigating Disparities
- Authors: Alex Oesterling, Jiaqi Ma, Flavio P. Calmon, Hima Lakkaraju
- Abstract summary: The Right to be Forgotten is a core principle outlined by the EU's General Data Protection Regulation (GDPR).
"Forgetting" can be naively achieved by retraining on the remaining data.
"Unlearning" impacts other properties critical to real-world applications such as fairness.
- Score: 5.724350004671127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Right to be Forgotten is a core principle outlined by regulatory
frameworks such as the EU's General Data Protection Regulation (GDPR). This
principle allows individuals to request that their personal data be deleted
from deployed machine learning models. While "forgetting" can be naively
achieved by retraining on the remaining dataset, it is computationally
expensive to do so with each new request. As such, several machine
unlearning methods have been proposed as efficient alternatives to retraining.
These methods aim to approximate the predictive performance of retraining, but
fail to consider how unlearning impacts other properties critical to real-world
applications such as fairness. In this work, we demonstrate that most efficient
unlearning methods cannot accommodate popular fairness interventions, and we
propose the first fair machine unlearning method that can efficiently unlearn
data instances from a fair objective. We derive theoretical results which
demonstrate that our method can provably unlearn data and provably maintain
fairness performance. Extensive experimentation with real-world datasets
highlights the efficacy of our method at unlearning data instances while
preserving fairness.
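For intuition, the sketch below shows one way such a method could look: an influence-function-style closed-form removal from a logistic regression trained with a simple group-mean fairness penalty. This is a minimal illustration under assumed names (fair_hessian, unlearn_point, lam, mu), not the authors' exact algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_hessian(theta, X, groups, lam, mu):
    # Hessian of: mean logistic loss + lam*||theta||^2 + mu*(v @ theta)^2,
    # where v is the gap between the two groups' mean feature vectors and
    # groups is a 0/1 array of protected-group membership.
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    H = (X * w[:, None]).T @ X / len(X) + 2.0 * lam * np.eye(X.shape[1])
    v = X[groups == 0].mean(axis=0) - X[groups == 1].mean(axis=0)
    return H + 2.0 * mu * np.outer(v, v)

def unlearn_point(theta, X, groups, x_del, y_del, lam=1e-3, mu=1.0):
    # One-shot approximate removal: shift theta along the inverse-Hessian
    # direction of the deleted point's loss gradient (influence function).
    g = (sigmoid(x_del @ theta) - y_del) * x_del  # per-point loss gradient
    H = fair_hessian(theta, X, groups, lam, mu)
    return theta + np.linalg.solve(H, g) / len(X)
```

Note the simplification: the shift in the group means caused by removing a single point is ignored here; a faithful fair-unlearning method would also account for how deletion changes the fairness term.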
Related papers
- RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints.
Many machine unlearning methods have been proposed that aim to 'erase' these datapoints from trained models.
We propose the RESTOR framework for evaluating machine unlearning along several dimensions.
arXiv Detail & Related papers (2024-10-31T20:54:35Z)
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach large language model (LLM) unlearning via gradient ascent (GA).
Despite their simplicity and efficiency, we suggest that GA-based methods are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
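One simple form such a control could take, sketched here as an assumption rather than the paper's actual mechanism, is to clip the gradient norm during the ascent step so a single forget batch cannot push the model arbitrarily far:

```python
import torch

def ga_unlearn_step(model, inputs, targets, loss_fn, optimizer, max_norm=1.0):
    # Gradient *ascent* on the forget batch: negate the loss before backprop.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    (-loss).backward()
    # Control knob: bound the update size to limit excessive unlearning.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return loss.item()
```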
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- Debiasing Machine Unlearning with Counterfactual Examples [31.931056076782202]
We analyze the causal factors behind the unlearning process and mitigate biases at both data and algorithmic levels.
We introduce an intervention-based approach, where knowledge to forget is erased with a debiased dataset.
Our method outperforms existing machine unlearning baselines on evaluation metrics.
arXiv Detail & Related papers (2024-04-24T09:33:10Z)
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Machine unlearning through fine-grained model parameters perturbation [26.653596302257057]
We propose fine-grained Top-K and Random-k parameter-perturbation strategies for inexact machine unlearning.
We also tackle the challenge of evaluating the effectiveness of machine unlearning.
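The Top-K idea can be sketched roughly as follows (names and the noise scale are illustrative assumptions, not the paper's implementation); Random-k would sample the perturbed indices uniformly instead:

```python
import numpy as np

def topk_perturb(theta, forget_grad, k, sigma=0.1, rng=None):
    # Perturb only the k parameters most sensitive to the forget set,
    # measured by gradient magnitude, leaving the rest of the model intact.
    rng = np.random.default_rng() if rng is None else rng
    idx = np.argsort(np.abs(forget_grad))[-k:]
    theta = theta.copy()
    theta[idx] += rng.normal(0.0, sigma, size=k)
    return theta
```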
arXiv Detail & Related papers (2024-01-09T07:14:45Z)
- Unlearn What You Want to Forget: Efficient Unlearning for LLMs [92.51670143929056]
Large language models (LLMs) have achieved significant progress by pre-training on, and memorizing, a wide range of textual data.
This process might suffer from privacy issues and violations of data protection regulations.
We propose an unlearning framework that can efficiently update LLMs without retraining the whole model after each data removal.
arXiv Detail & Related papers (2023-10-31T03:35:59Z)
- Random Relabeling for Efficient Machine Unlearning [8.871042314510788]
Individuals' right to retract personal data and relevant data privacy regulations pose great challenges to machine learning.
We propose a random-relabeling unlearning scheme to efficiently handle sequential data removal requests.
We also propose a less constraining removal certification method based on probability-distribution similarity with naive unlearning.
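The core mechanism admits a short sketch (an illustration under assumed names, not the paper's exact procedure): overwrite each requested label with a random different class, then fine-tune so the model's memory of the true labels is disturbed:

```python
import numpy as np

def random_relabel(y, forget_idx, num_classes, rng=None):
    # Replace each forget-set label with a label drawn uniformly from the
    # other classes; the caller then fine-tunes on the relabeled dataset.
    rng = np.random.default_rng() if rng is None else rng
    y = y.copy()
    for i in forget_idx:
        choices = [c for c in range(num_classes) if c != y[i]]
        y[i] = rng.choice(choices)
    return y
```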
arXiv Detail & Related papers (2023-05-21T02:37:26Z)
- Fairness-Aware Data Valuation for Supervised Learning [4.874780144224057]
We propose Fairness-Aware Data valuatiOn (FADO) to incorporate fairness concerns into a series of ML-related tasks.
We show how FADO can be applied as the basis for unfairness mitigation pre-processing techniques.
Our methods achieve promising results -- up to a 40 p.p. improvement in fairness at a less than 1 p.p. loss in performance compared to a baseline.
arXiv Detail & Related papers (2023-03-29T18:51:13Z)
- To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods [31.83413902054534]
We present the first study on machine unlearning methods to reveal their fairness implications.
Under non-uniform data deletion, SISA leads to better fairness compared with ORTR and AmnesiacML, while initial training and uniform data deletion do not necessarily affect the fairness of all three methods.
arXiv Detail & Related papers (2023-02-07T09:48:29Z)
- Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers [71.70205894168039]
We consider instance-wise unlearning, whose goal is to delete information about a set of instances from a pre-trained model.
We propose two methods that reduce forgetting on the remaining data: 1) utilizing adversarial examples to overcome forgetting at the representation-level and 2) leveraging weight importance metrics to pinpoint network parameters guilty of propagating unwanted information.
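The first idea can be illustrated with a one-step FGSM-style perturbation, simplified here to logistic regression; fgsm_example and eps are assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(theta, x, y, eps=0.05):
    # For logistic loss, d(loss)/dx = (p - y) * theta; stepping in its sign
    # direction yields a one-step adversarial example of (x, y).
    p = sigmoid(x @ theta)
    return x + eps * np.sign((p - y) * theta)
```

Training on such perturbed remaining-data examples is meant to keep their representations stable while the forget set is erased.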
arXiv Detail & Related papers (2023-01-27T07:53:50Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first method for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
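A minimal sketch of such a closed-form update, assuming a logistic-regression model with ridge damping lam (both illustrative choices): correcting one instance's label costs a single inverse-Hessian step instead of a retrain.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relabel_update(theta, X, x, y_old, y_new, lam=1e-3):
    # Newton-style correction: theta' ~= theta +
    #   H^{-1} (grad(x, y_old) - grad(x, y_new)) / n,
    # assuming theta was (near-)optimal before the label change.
    p = sigmoid(X @ theta)
    H = (X * (p * (1 - p))[:, None]).T @ X / len(X) + lam * np.eye(len(theta))
    g_old = (sigmoid(x @ theta) - y_old) * x
    g_new = (sigmoid(x @ theta) - y_new) * x
    return theta + np.linalg.solve(H, g_old - g_new) / len(X)
```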
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.