On the Trade-Off between Actionable Explanations and the Right to be
Forgotten
- URL: http://arxiv.org/abs/2208.14137v3
- Date: Wed, 11 Oct 2023 15:34:51 GMT
- Title: On the Trade-Off between Actionable Explanations and the Right to be
Forgotten
- Authors: Martin Pawelczyk and Tobias Leemann and Asia Biega and Gjergji Kasneci
- Abstract summary: We study the problem of recourse invalidation in the context of data deletion requests.
We show that the removal of as few as 2 data instances from the training set can invalidate up to 95 percent of all recourses output by popular state-of-the-art algorithms.
- Score: 21.26254644739585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning (ML) models are increasingly being deployed in
high-stakes applications, policymakers have suggested tighter data protection
regulations (e.g., GDPR, CCPA). One key principle is the "right to be
forgotten" which gives users the right to have their data deleted. Another key
principle is the right to an actionable explanation, also known as algorithmic
recourse, allowing users to reverse unfavorable decisions. To date, it is
unknown whether these two principles can be operationalized simultaneously.
Therefore, we introduce and study the problem of recourse invalidation in the
context of data deletion requests. More specifically, we theoretically and
empirically analyze the behavior of popular state-of-the-art algorithms and
demonstrate that the recourses generated by these algorithms are likely to be
invalidated if a small number of data deletion requests (e.g., 1 or 2) warrant
updates of the predictive model. For the setting of differentiable models, we
suggest a framework to identify a minimal subset of critical training points
which, when removed, maximize the fraction of invalidated recourses. Using our
framework, we empirically show that the removal of as few as 2 data
instances from the training set can invalidate up to 95 percent of all
recourses output by popular state-of-the-art algorithms. Thus, our work raises
fundamental questions about the compatibility of "the right to an actionable
explanation" with the "right to be forgotten", while also
providing constructive insights on the determining factors of recourse
robustness.
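The deletion-and-retraining dynamic described in the abstract can be made concrete with a small, self-contained sketch. Everything below is an illustrative assumption rather than the authors' exact framework: synthetic data, a logistic regression model, a Wachter-style counterfactual search, and a naive greedy scan over candidate deletions. The sketch only shows how removing a handful of training points and retraining can flip the model's verdict on previously valid recourses.

```python
# Minimal sketch of recourse invalidation under data deletion.
# All modeling choices here are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

def recourse(x, clf, step=0.05, max_iter=200):
    """Move x along the model's weight vector until the favorable class (1)
    is predicted; for a linear model this is the steepest-ascent direction."""
    w = clf.coef_.ravel()
    x_cf = x.copy()
    for _ in range(max_iter):
        if clf.predict(x_cf.reshape(1, -1))[0] == 1:
            break
        x_cf += step * w / np.linalg.norm(w)
    return x_cf

# Recourses for a batch of negatively classified individuals.
neg_idx = np.where(model.predict(X) == 0)[0][:50]
recourses = np.array([recourse(X[i], model) for i in neg_idx])

def invalidation_rate(removed):
    """Retrain without the removed points and count recourses that no longer
    receive the favorable prediction."""
    keep = np.setdiff1d(np.arange(len(X)), removed)
    retrained = LogisticRegression().fit(X[keep], y[keep])
    return np.mean(retrained.predict(recourses) == 0)

# Greedy scan over candidate deletions (a crude stand-in for the paper's
# optimization over minimal subsets of critical training points).
removed = []
for _ in range(2):  # simulate just two deletion requests
    candidates = rng.choice(len(X), size=50, replace=False)
    best = max(candidates, key=lambda i: invalidation_rate(removed + [i]))
    removed.append(int(best))

print("deleted training points:", removed)
print(f"fraction of recourses invalidated: {invalidation_rate(removed):.1%}")
```

In this toy setup the invalidation fraction depends heavily on how close the generated recourses sit to the decision boundary, which mirrors the paper's observation that recourse robustness is governed by a few determining factors of the model update.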
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves an improvement of over 20% in forgetting error compared to the state of the art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- MUSE: Machine Unlearning Six-Way Evaluation for Language Models [109.76505405962783]
Language models (LMs) are trained on vast amounts of text data, which may include private and copyrighted content.
We propose MUSE, a comprehensive machine unlearning evaluation benchmark.
We benchmark how effectively eight popular unlearning algorithms can unlearn Harry Potter books and news articles.
arXiv Detail & Related papers (2024-07-08T23:47:29Z)
- Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse [13.95253855760017]
We introduce LocalFACE, a model-agnostic technique that composes feasible and actionable counterfactual explanations.
Our explainer preserves the privacy of users by only leveraging data that it specifically requires to construct actionable algorithmic recourse.
arXiv Detail & Related papers (2023-09-08T08:47:23Z)
- Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten [14.636997283608414]
The right to explanation allows individuals to request an actionable explanation for an algorithmic decision.
The right to be forgotten grants them the right to ask for their data to be deleted from all the databases and models of an organization.
We propose the first algorithmic framework to resolve the tension between the two principles.
arXiv Detail & Related papers (2023-02-08T19:03:00Z)
- Learning to Unlearn: Instance-wise Unlearning for Pre-trained Classifiers [71.70205894168039]
We consider instance-wise unlearning, whose goal is to delete information about a set of instances from a pre-trained model.
We propose two methods that reduce forgetting on the remaining data: 1) utilizing adversarial examples to overcome forgetting at the representation-level and 2) leveraging weight importance metrics to pinpoint network parameters guilty of propagating unwanted information.
arXiv Detail & Related papers (2023-01-27T07:53:50Z)
- Knowledge is Power: Understanding Causality Makes Legal Judgment Prediction Models More Generalizable and Robust [3.555105847974074]
Legal Judgment Prediction (LJP) serves as a form of legal assistance to ease the heavy workload of the limited number of legal practitioners.
Most existing methods apply various large-scale pre-trained language models fine-tuned on LJP tasks to obtain consistent improvements.
We discover that the state-of-the-art (SOTA) model makes judgment predictions according to irrelevant (or non-causal) information.
arXiv Detail & Related papers (2022-11-06T07:03:31Z)
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
- Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first such general evaluation definition.
arXiv Detail & Related papers (2022-08-23T09:37:31Z)
- Robust Predictable Control [149.71263296079388]
We show that our method achieves much tighter compression than prior methods, achieving up to 5x higher reward than a standard information bottleneck.
We also demonstrate that our method learns policies that are more robust and generalize better to new tasks.
arXiv Detail & Related papers (2021-09-07T17:29:34Z)
- Bounding Information Leakage in Machine Learning [26.64770573405079]
This paper investigates fundamental bounds on information leakage.
We identify and bound the success rate of the worst-case membership inference attack.
We derive bounds on the mutual information between the sensitive attributes and model parameters.
arXiv Detail & Related papers (2021-05-09T08:49:14Z)
- Algorithmic recourse under imperfect causal knowledge: a probabilistic approach [15.124107808802703]
We show that it is impossible to guarantee recourse without access to the true structural equations.
We propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge (see the sketch after this entry).
arXiv Detail & Related papers (2020-06-11T21:19:07Z)
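A rough illustration of the idea in the entry above: choose the lowest-cost action whose recourse succeeds with high probability when the model is uncertain. The bootstrap ensemble, candidate actions, and the 0.9 threshold below are assumptions made purely for illustration; that paper works with structural causal models, not an ensemble of classifiers.

```python
# Hypothetical sketch of probabilistic recourse under limited knowledge.
# A bootstrap ensemble stands in for uncertainty about the true
# data-generating process (only an illustrative substitute for the
# causal machinery used in the referenced paper).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=400, n_features=4, random_state=1)

# Train several plausible models on bootstrap resamples of the data.
ensemble = []
for _ in range(20):
    idx = rng.integers(0, len(X), size=len(X))
    ensemble.append(LogisticRegression().fit(X[idx], y[idx]))

x = X[0]                                        # individual seeking recourse
actions = rng.normal(scale=0.5, size=(200, 4))  # candidate feature changes

def acceptance_prob(action):
    """Fraction of plausible models that predict the favorable class (1)."""
    x_new = (x + action).reshape(1, -1)
    return np.mean([m.predict(x_new)[0] == 1 for m in ensemble])

# Keep actions that succeed with probability >= 0.9 and pick the cheapest one.
valid = [a for a in actions if acceptance_prob(a) >= 0.9]
if valid:
    best = min(valid, key=np.linalg.norm)
    print("chosen action:", np.round(best, 2))
    print("estimated acceptance probability:", acceptance_prob(best))
else:
    print("no candidate action reaches the 0.9 acceptance threshold")
```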
This list is automatically generated from the titles and abstracts of the papers on this site.