Bounding Training Data Reconstruction in Private (Deep) Learning
- URL: http://arxiv.org/abs/2201.12383v1
- Date: Fri, 28 Jan 2022 19:24:30 GMT
- Title: Bounding Training Data Reconstruction in Private (Deep) Learning
- Authors: Chuan Guo, Brian Karrer, Kamalika Chaudhuri, Laurens van der Maaten
- Abstract summary: Differential privacy is widely accepted as the de facto method for preventing data leakage in ML.
Existing semantic guarantees for DP focus on membership inference.
We show that two distinct privacy accounting methods -- Renyi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
- Score: 40.86813581191581
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential privacy is widely accepted as the de facto method for preventing
data leakage in ML, and conventional wisdom suggests that it offers strong
protection against privacy attacks. However, existing semantic guarantees for
DP focus on membership inference, which may overestimate the adversary's
capabilities and is not applicable when membership status itself is
non-sensitive. In this paper, we derive the first semantic guarantees for DP
mechanisms against training data reconstruction attacks under a formal threat
model. We show that two distinct privacy accounting methods -- Renyi
differential privacy and Fisher information leakage -- both offer strong
semantic protection against data reconstruction attacks.
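For reference, the Rényi differential privacy guarantee the abstract invokes is the standard accounting definition (this recap is background, not a claim from the paper itself): a randomized mechanism $M$ satisfies $(\alpha, \epsilon)$-RDP if, for all adjacent datasets $D, D'$,
$$
D_\alpha\!\left(M(D) \,\|\, M(D')\right)
= \frac{1}{\alpha - 1} \log \mathbb{E}_{x \sim M(D')}\!\left[\left(\frac{\Pr[M(D) = x]}{\Pr[M(D') = x]}\right)^{\alpha}\right]
\le \epsilon .
$$
As a concrete instance, the Gaussian mechanism that releases $f(D) + \mathcal{N}(0, \sigma^2 I)$ for a query $f$ with $\ell_2$-sensitivity $\Delta$ satisfies $(\alpha, \alpha \Delta^2 / (2\sigma^2))$-RDP for every $\alpha > 1$. The paper's contribution is to translate such accounting statements, along with the analogous Fisher information leakage accounting, into explicit bounds on a reconstruction adversary's success probability under its formal threat model.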
Related papers
- FedAdOb: Privacy-Preserving Federated Deep Learning with Adaptive Obfuscation [26.617708498454743]
Federated learning (FL) has emerged as a collaborative approach that allows multiple clients to jointly learn a machine learning model without sharing their private data.
We propose a novel adaptive obfuscation mechanism, coined FedAdOb, to protect private data without sacrificing the original model's performance.
arXiv Detail & Related papers (2024-06-03T08:12:09Z) - Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z) - Visual Privacy Auditing with Diffusion Models [52.866433097406656]
We propose a reconstruction attack based on diffusion models (DMs) that assumes adversary access to real-world image priors.
We show that (1) real-world data priors significantly influence reconstruction success, (2) current reconstruction bounds do not model the risk posed by data priors well, and (3) DMs can serve as effective auditing tools for visualizing privacy leakage.
arXiv Detail & Related papers (2024-03-12T12:18:55Z) - Bounding Training Data Reconstruction in DP-SGD [42.36933026300976]
Differentially private training offers a protection which is usually interpreted as a guarantee against membership inference attacks.
By proxy, this guarantee extends to other threats like reconstruction attacks attempting to extract complete training examples.
Recent works provide evidence that if one does not need to protect against membership attacks but instead only wants to protect against training data reconstruction, then utility of private models can be improved.
arXiv Detail & Related papers (2023-02-14T18:02:34Z) - No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z) - Defending against Reconstruction Attacks with Rényi Differential Privacy [72.1188520352079]
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget.
We show that, for the same mechanism, we can derive privacy guarantees against reconstruction attacks that are better than the traditional ones from the literature.
arXiv Detail & Related papers (2022-02-15T18:09:30Z) - Semantics-Preserved Distortion for Personal Privacy Protection in Information Management [65.08939490413037]
This paper suggests a linguistically-grounded approach to distort texts while maintaining semantic integrity.
We present two distinct frameworks for semantic-preserving distortion: a generative approach and a substitutive approach.
We also explore privacy protection in a specific medical information management scenario, showing our method effectively limits sensitive data memorization.
arXiv Detail & Related papers (2022-01-04T04:01:05Z) - Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [31.34410250008759]
This paper measures the trade-off between model accuracy and privacy losses incurred by reconstruction, tracing and membership attacks.
Experiments show that model accuracies are improved on average by 5-20% compared with baseline mechanisms.
arXiv Detail & Related papers (2020-06-20T15:48:57Z)