Towards Bridging the Gaps between the Right to Explanation and the Right
to be Forgotten
- URL: http://arxiv.org/abs/2302.04288v2
- Date: Fri, 10 Feb 2023 03:24:50 GMT
- Title: Towards Bridging the Gaps between the Right to Explanation and the Right
to be Forgotten
- Authors: Satyapriya Krishna, Jiaqi Ma, Himabindu Lakkaraju
- Abstract summary: The right to explanation allows individuals to request an actionable explanation for an algorithmic decision.
The right to be forgotten grants them the right to ask for their data to be deleted from all the databases and models of an organization.
We propose the first algorithmic framework to resolve the tension between the two principles.
- Score: 14.636997283608414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Right to Explanation and the Right to be Forgotten are two important
principles outlined to regulate algorithmic decision making and data usage in
real-world applications. While the right to explanation allows individuals to
request an actionable explanation for an algorithmic decision, the right to be
forgotten grants them the right to ask for their data to be deleted from all
the databases and models of an organization. Intuitively, enforcing the right
to be forgotten may trigger model updates which in turn invalidate previously
provided explanations, thus violating the right to explanation. In this work,
we investigate the technical implications arising due to the interference
between the two aforementioned regulatory principles, and propose the first
algorithmic framework to resolve the tension between them. To this end, we
formulate a novel optimization problem to generate explanations that are robust
to model updates due to the removal of training data instances by data deletion
requests. We then derive an efficient approximation algorithm to handle the
combinatorial complexity of this optimization problem. We theoretically
demonstrate that our method generates explanations that are provably robust to
worst-case data deletion requests with bounded costs in case of linear models
and certain classes of non-linear models. Extensive experimentation with
real-world datasets demonstrates the efficacy of the proposed framework.
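The core idea in the abstract can be sketched for a linear model. The following is a minimal illustration only, not the paper's algorithm: it replaces the authors' efficient approximation with brute-force leave-one-out retraining to bound how much the decision boundary can move under any single deletion request, then pushes a counterfactual explanation until it remains valid even in the worst case. The dataset, the step size, and the margin threshold are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fit(X, y):
    clf = LogisticRegression().fit(X, y)
    return clf.coef_.ravel(), clf.intercept_[0]

w, b = fit(X, y)
x = np.array([-1.0, -1.0])  # negatively classified instance seeking recourse

# Brute-force leave-one-out retraining: how much can the model change
# if any single training point is deleted? (The paper avoids this cost
# with an approximation algorithm.)
deltas = []
for i in range(len(X)):
    wi, bi = fit(np.delete(X, i, 0), np.delete(y, i))
    deltas.append((wi - w, bi - b))

def worst_case_margin(x_cf):
    """Decision value at x_cf under the worst single-deletion update."""
    base = w @ x_cf + b
    drops = [dw @ x_cf + db for dw, db in deltas]
    return base + min(drops + [0.0])

# Move the counterfactual along w until it clears the worst-case bound,
# so the explanation survives any one deletion request.
step = w / np.linalg.norm(w)
x_cf = x.copy()
while worst_case_margin(x_cf) <= 0.1:
    x_cf = x_cf + 0.1 * step
```

By construction, `x_cf` is classified positively by the original model and by every model retrained with one training point removed; extending the bound to the removal of k points is the combinatorial problem the paper's approximation addresses.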
Related papers
- EraseDiff: Erasing Data Influence in Diffusion Models [51.225365010401006]
We introduce EraseDiff, an unlearning algorithm to address concerns related to data memorization.
Our approach formulates the unlearning task as a constrained optimization problem.
We show that EraseDiff effectively preserves the model's utility, efficacy, and efficiency.
arXiv Detail & Related papers (2024-01-11T09:30:36Z)
- Efficient Alternating Minimization Solvers for Wyner Multi-View Unsupervised Learning [0.0]
We propose two novel formulations that enable the development of computationally efficient solvers based on the alternating minimization principle.
The proposed solvers offer computational efficiency, theoretical convergence guarantees, local-minima complexity that scales with the number of views, and exceptional accuracy compared with state-of-the-art techniques.
arXiv Detail & Related papers (2023-03-28T10:17:51Z)
- Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Efficient Learning of Decision-Making Models: A Penalty Block Coordinate Descent Algorithm for Data-Driven Inverse Optimization [12.610576072466895]
We consider the inverse problem where we use prior decision data to uncover the underlying decision-making process.
This statistical learning problem is referred to as data-driven inverse optimization.
We propose an efficient block coordinate descent-based algorithm to solve large problem instances.
arXiv Detail & Related papers (2022-10-27T12:52:56Z)
- On the Trade-Off between Actionable Explanations and the Right to be Forgotten [21.26254644739585]
We study the problem of recourse invalidation in the context of data deletion requests.
We show that the removal of as few as 2 data instances from the training set can invalidate up to 95 percent of all recourses output by popular state-of-the-art algorithms.
arXiv Detail & Related papers (2022-08-30T10:35:32Z)
- Learning to Limit Data Collection via Scaling Laws: Data Minimization Compliance in Practice [62.44110411199835]
We build on literature in machine learning and law to propose a framework for limiting data collection, based on an interpretation of data minimization that ties the data collected to system performance.
We formalize a data minimization criterion based on performance-curve derivatives and provide an effective and interpretable piecewise power-law technique.
arXiv Detail & Related papers (2021-07-16T19:59:01Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
- Privacy-Preserving Gaussian Process Regression -- A Modular Approach to the Application of Homomorphic Encryption [4.1499725848998965]
Fully homomorphic encryption (FHE) allows computation on data while it remains encrypted.
Some commonly used machine learning algorithms, such as Gaussian process regression, are poorly suited to FHE.
We show that a modular approach, which applies FHE only to the sensitive steps of a workflow that need protection, allows one party to make predictions on their data.
arXiv Detail & Related papers (2020-01-28T11:50:36Z)
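The modular idea in the last entry, encrypting only the sensitive step of a workflow, can be illustrated with a toy homomorphic scheme. The sketch below uses Paillier encryption, which is only additively homomorphic (not the FHE the paper discusses) and uses deliberately tiny, insecure key parameters; the feature vector, weights, and helper names are all invented. It shows a client keeping its features encrypted while a server computes a linear prediction (the kind of dot product a Gaussian process predictive mean reduces to) homomorphically.

```python
import math
import random

# Toy Paillier keypair: small primes for illustration only, NOT secure.
p, q = 10007, 10009
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)  # modular inverse; valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def he_add(c1, c2):
    # Multiplying ciphertexts adds plaintexts: Enc(a) * Enc(b) -> Enc(a + b)
    return (c1 * c2) % n2

def he_scale(c, k):
    # Raising to a plaintext scalar: Enc(a) ** k -> Enc(k * a)
    return pow(c, k, n2)

# Client encrypts its private integer features; the server computes the
# prediction w . x without ever seeing x in the clear.
x = [3, 5, 7]   # client's private features
w = [2, 4, 1]   # server's plaintext model weights
enc_x = [encrypt(xi) for xi in x]
enc_pred = encrypt(0)
for ci, wi in zip(enc_x, w):
    enc_pred = he_add(enc_pred, he_scale(ci, wi))
print(decrypt(enc_pred))  # client decrypts: 2*3 + 4*5 + 1*7 = 33
```

Only the sensitive dot product runs under encryption; everything around it (kernel construction, hyperparameter fitting) stays in plaintext, which is the modularity the abstract describes.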
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.