On the Privacy Risks of Algorithmic Recourse
- URL: http://arxiv.org/abs/2211.05427v1
- Date: Thu, 10 Nov 2022 09:04:24 GMT
- Title: On the Privacy Risks of Algorithmic Recourse
- Authors: Martin Pawelczyk and Himabindu Lakkaraju and Seth Neel
- Abstract summary: We make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model's training data.
Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
- Score: 17.33484111779023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As predictive models are increasingly being employed to make consequential
decisions, there is a growing emphasis on developing techniques that can
provide algorithmic recourse to affected individuals. While such recourses can
be immensely beneficial to affected individuals, potential adversaries could
also exploit these recourses to compromise privacy. In this work, we make the
first attempt at investigating if and how an adversary can leverage recourses
to infer private information about the underlying model's training data. To
this end, we propose a series of novel membership inference attacks which
leverage algorithmic recourse. More specifically, we extend the prior
literature on membership inference attacks to the recourse setting by
leveraging the distances between data instances and their corresponding
counterfactuals output by state-of-the-art recourse methods. Extensive
experimentation with real world and synthetic datasets demonstrates significant
privacy leakage through recourses. Our work establishes unintended privacy
leakage as an important risk in the widespread adoption of recourse methods.
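
For intuition, below is a minimal sketch (not the authors' implementation) of a distance-based membership inference attack of the kind the abstract describes: the adversary requests a counterfactual for a query instance from a recourse method, measures the distance between the instance and its counterfactual, and thresholds that distance to predict membership. The `recourse_fn`, the threshold, and the direction of the test are hypothetical placeholders that an adversary would have to calibrate (e.g. with shadow models or held-out data).

```python
import numpy as np

def counterfactual_distance(model, recourse_fn, x, norm=1):
    """Distance between an instance and its recourse counterfactual.

    `recourse_fn` is a hypothetical stand-in for any recourse method
    that, given a model and an instance, returns a counterfactual.
    """
    x_cf = recourse_fn(model, x)
    return np.linalg.norm(np.asarray(x) - np.asarray(x_cf), ord=norm)

def membership_inference(model, recourse_fn, x, threshold, larger_is_member=True):
    """Predict membership (1 = member, 0 = non-member) by thresholding
    the counterfactual distance. Both `threshold` and the direction of
    the test are assumptions calibrated by the adversary, not values
    prescribed by the paper."""
    d = counterfactual_distance(model, recourse_fn, x)
    return int(d >= threshold) if larger_is_member else int(d < threshold)
```
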
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Visual Privacy Auditing with Diffusion Models [52.866433097406656]
We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors.
We show that (1) real-world data priors significantly influence reconstruction success, (2) current reconstruction bounds do not model the risk posed by data priors well, and (3) DMs can serve as effective auditing tools for visualizing privacy leakage.
arXiv Detail & Related papers (2024-03-12T12:18:55Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse [13.95253855760017]
We introduce LocalFACE, a model-agnostic technique that composes feasible and actionable counterfactual explanations.
Our explainer preserves the privacy of users by only leveraging data that it specifically requires to construct actionable algorithmic recourse.
arXiv Detail & Related papers (2023-09-08T08:47:23Z)
- Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z)
- A General Framework for Auditing Differentially Private Machine Learning [27.99806936918949]
We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice.
Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations.
arXiv Detail & Related papers (2022-10-16T21:34:18Z)
- Collaborative Drug Discovery: Inference-level Data Protection Perspective [2.624902795082451]
The pharmaceutical industry can better leverage its data assets to virtualize drug discovery through a collaborative machine learning platform.
There are non-negligible risks stemming from the unintended leakage of participants' training data.
This paper describes a privacy risk assessment for collaborative modeling in the preclinical phase of drug discovery to accelerate the selection of promising drug candidates.
arXiv Detail & Related papers (2022-05-13T08:30:50Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with Differential Privacy (DP) was implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.