Bounding Training Data Reconstruction in Private (Deep) Learning
- URL: http://arxiv.org/abs/2201.12383v1
- Date: Fri, 28 Jan 2022 19:24:30 GMT
- Title: Bounding Training Data Reconstruction in Private (Deep) Learning
- Authors: Chuan Guo, Brian Karrer, Kamalika Chaudhuri, Laurens van der Maaten
- Abstract summary: Differential privacy is widely accepted as the de facto method for preventing data leakage in ML.
Existing semantic guarantees for DP focus on membership inference.
We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
- Score: 40.86813581191581
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential privacy is widely accepted as the de facto method for preventing
data leakage in ML, and conventional wisdom suggests that it offers strong
protection against privacy attacks. However, existing semantic guarantees for
DP focus on membership inference, which may overestimate the adversary's
capabilities and is not applicable when membership status itself is
non-sensitive. In this paper, we derive the first semantic guarantees for DP
mechanisms against training data reconstruction attacks under a formal threat
model. We show that two distinct privacy accounting methods -- Renyi
differential privacy and Fisher information leakage -- both offer strong
semantic protection against data reconstruction attacks.
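As a rough illustration of the two accounting notions the abstract names, the sketch below (not the paper's code; sigma, delta_2, d, and alpha are illustrative assumptions) works out the textbook Gaussian mechanism M(x) = x + N(0, sigma^2 I): its standard Rényi DP guarantee, and the Cramér-Rao lower bound that its Fisher information implies on the mean squared error of any unbiased reconstruction attack.

    # Minimal sketch (assumed setting, not the paper's code): Gaussian mechanism
    # releasing M(x) = x + N(0, sigma^2 I) for a d-dimensional record x.
    sigma = 2.0    # noise standard deviation (assumption)
    delta_2 = 1.0  # L2 sensitivity of the release (assumption)
    d = 10         # record dimensionality (assumption)
    alpha = 8.0    # Renyi order (assumption)

    # Renyi DP of the Gaussian mechanism (Mironov, 2017):
    # M satisfies (alpha, alpha * delta_2**2 / (2 * sigma**2))-RDP.
    eps_rdp = alpha * delta_2 ** 2 / (2 * sigma ** 2)

    # Fisher information of the output with respect to x is (1 / sigma**2) * I,
    # so the Cramer-Rao bound gives, for any unbiased reconstruction x_hat,
    #   E[||x_hat - x||^2] >= trace(sigma**2 * I) = d * sigma**2.
    mse_lower_bound = d * sigma ** 2

    print(f"({alpha:g}, {eps_rdp:g})-RDP; unbiased reconstruction MSE >= {mse_lower_bound:g}")

The paper's actual bounds cover general mechanisms, composed training runs, and attackers that need not be unbiased; the snippet only shows how the two accounting quantities arise in the simplest Gaussian case.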
Related papers
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
Membership inference attacks (MIAs) expose significant privacy risks by determining whether an individual's data is in a dataset.
We propose a game-theoretic framework that models privacy protection from MIA as a Bayesian game between a defender and an attacker.
We call the data-sharing policy the defender obtains in this way Bayes-Nash Generative Privacy (BNGP).
arXiv Detail & Related papers (2024-10-09T20:29:04Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Visual Privacy Auditing with Diffusion Models [52.866433097406656]
We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors.
We show that (1) real-world data priors significantly influence reconstruction success, (2) current reconstruction bounds do not model the risk posed by data priors well, and (3) DMs can serve as effective auditing tools for visualizing privacy leakage.
arXiv Detail & Related papers (2024-03-12T12:18:55Z)
- Bounding Training Data Reconstruction in DP-SGD [42.36933026300976]
Differentially private training offers protection that is usually interpreted as a guarantee against membership inference attacks.
By proxy, this guarantee extends to other threats like reconstruction attacks attempting to extract complete training examples.
Recent works provide evidence that if one does not need to protect against membership attacks but only against training data reconstruction, then the utility of private models can be improved.
arXiv Detail & Related papers (2023-02-14T18:02:34Z)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- Defending against Reconstruction Attacks with Rényi Differential Privacy [72.1188520352079]
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget.
We show that, for the same mechanism, we can derive privacy guarantees for reconstruction attacks that are better than the traditional ones from the literature.
arXiv Detail & Related papers (2022-02-15T18:09:30Z)
- Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [31.34410250008759]
This paper measures the trade-off between model accuracy and privacy losses incurred by reconstruction, tracing and membership attacks.
Experiments show that model accuracies are improved on average by 5-20% compared with baseline mechanisms.
arXiv Detail & Related papers (2020-06-20T15:48:57Z)